BTRFS currently works well on a single disk, or in a NAS environment with 4 or more disks using RAID1C3 (metadata also RAID1C3, so two disks can fail). For 2 or 3 disks, RAID1 is preferred and also works well (data is duplicated, so it can still self heal and one disk can fail). Note: with RAID6, to avoid data loss or corruption on a power failure or kernel hang you must use a different RAID profile for the metadata, and even then RAID6 still has some potential issues, so it is not advised. It is mostly ready, but there are a few edge cases where data can be lost that are still being fixed. For example, with 6 disks the command to create a RAID6 array is:
mkfs.btrfs -L myraidlabel -m raid1c3 -d raid6 -f <device1> <device2> <device3> <device4> <device5> <device6>
However, as RAID6 is not yet considered 100% safe, it is better to do something like:
mkfs.btrfs -L myraidlabel -m raid1c3 -d raid1c3 -f <device1> <device2> <device3> <device4> <device5> <device6>
This command destroys what is on the disks (-f forces creation).
The advantage of the first command is that metadata is normally small, while RAID6 gives the space and speed benefits for the data written. With the second command, which is 100% safe, you only get the space of 2 of the 6 disks, i.e. 33% is usable (data is written 3 times in RAID1C3).
Once you have more than 12 disks, or more than 24TB across the disks, RAID6 is no longer considered appropriate or safe anyway due to the risk of an unrecoverable read error (URE) during a rebuild, so you should use only RAID1C3 or RAID1C4 for both metadata and data. You can then only use 33% or 25% of the total disk space, though. Some companies use RAID10, which gives 50% of the disk space with an even number of disks, but only one disk failure is safe, so I personally don't use or recommend it. ZFS might be a better choice at this number of disks or more (though its performance is unfortunately slower), so best do your research here… If you are unsure, RAID1C3 is pretty good: two disks can fail and it is fairly performant. You just have to buy 3x the number of disks, as only 33% of the total disk space is usable.
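The usable-capacity percentages above can be sanity checked with a little shell arithmetic (the disk count and sizes here are hypothetical examples, not a recommendation):

```shell
# Hypothetical array: 6 disks of 4TB each = 24TB raw.
disks=6
size_tb=4
raw=$(( disks * size_tb ))
echo "raid1c3 usable: $(( raw / 3 )) TB of $raw TB"  # 3 copies of data -> 33%
echo "raid1c4 usable: $(( raw / 4 )) TB of $raw TB"  # 4 copies of data -> 25%
echo "raid10  usable: $(( raw / 2 )) TB of $raw TB"  # 2 copies of data -> 50%
```

On a live array, `btrfs filesystem usage /path/to/array` reports the real allocated and free space per profile.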
Don't forget you can convert RAID profiles with a balance, e.g.:
If the array is already created, you can convert to RAID1C3 (or any other profile) with btrfs balance start -mconvert=raid1c3 /path/to/array.
-mconvert=… converts metadata while -dconvert=… converts data, and you can provide both at the same time if you want.
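For example, converting both metadata and data to RAID1C3 in one go looks like this (the mount point is just a placeholder; a conversion rewrites every chunk, so it can take a long time on a full array):

```shell
# Convert both metadata and data profiles in a single balance run.
# /mnt/array is a placeholder mount point for your array.
btrfs balance start -mconvert=raid1c3 -dconvert=raid1c3 /mnt/array

# Check progress from another terminal:
btrfs balance status /mnt/array
```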
A weekly scrub AND balance is suggested in a RAID environment so that self healing and filesystem maintenance are performed. For a single disk a monthly scrub is fine, run manually or scheduled with cron if you prefer. If there are any corrupt files, a single disk can't self heal anyway (only one copy of the data is kept), so you would be restoring from backup, and balancing on a single disk largely takes care of itself, so I normally don't worry about it.
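As a sketch, the weekly RAID schedule could be a root crontab like the following (the mount point and times are placeholders, and the usage filters are a common choice to keep the balance lightweight rather than rewriting everything):

```shell
# Example root crontab entries (edit with: crontab -e as root).
# Scrub every Sunday at 03:00, balance every Sunday at 05:00.
# /mnt/array is a placeholder mount point.
0 3 * * 0  btrfs scrub start /mnt/array
0 5 * * 0  btrfs balance start -dusage=50 -musage=50 /mnt/array
```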
For single disks in a home PC I use the following mount options in fstab: defaults,compress=lzo,autodefrag,discard=async,space_cache=v2
So for example a disk would be mounted as such in the fstab: UUID=383732b1-5e87-4b68-a15a-f044bc559877 / btrfs defaults,compress=lzo,autodefrag,discard=async,space_cache=v2,subvol=@ 0 0
This keeps things nice and tidy and automatically handles trim and balance. As mentioned, there is no self healing on a single disk, so you can run a scrub to check for errors every now and again, but a backup is needed to restore files. Don't forget to dup metadata even on a single disk; how to do that is covered elsewhere in this wiki.
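For reference, a typical invocation to convert metadata to the dup profile on an already-mounted single disk looks like this (the mount point is a placeholder):

```shell
# Switch metadata to the dup profile (two copies on the one disk).
# /mnt/mydisk is a placeholder mount point.
btrfs balance start -mconvert=dup /mnt/mydisk
```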
Overall, BTRFS is production ready from Linux kernel 6 onwards. There are a few gotchas, but nothing major, and if you use RAID1C3/C4 your data is kept safe.