* Btrfs RAID-10 performance
From: Miloslav Hůla @ 2020-09-03 13:13 UTC
  To: linux-btrfs

Hello,

we are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro 
server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of RAM. 
We run 'btrfs scrub start -B -d /data' every Sunday as a cron task. It 
takes about 50 minutes to finish.
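
For reference, the scrub job is an /etc/cron.d-style entry along these 
lines (the time of day here is illustrative, the command is the one above):

# m h dom mon dow user  command
0 4 * * 0  root  /bin/btrfs scrub start -B -d /data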

# uname -a
Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 
GNU/Linux

The RAID is composed of 16 hard drives. The drives are attached to an 
AVAGO MegaRAID SAS 9361-8i controller, each exported to the OS as a 
single-drive RAID-0 device. All of them are 2.5" 15k SAS drives.
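
Each logical drive was created on the controller roughly like this 
(MegaCli syntax from memory; the enclosure:slot address is an example):

MegaCli64 -CfgLdAdd -r0 [8:0] WB RA -a0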

The server provides IMAP via Dovecot 2.2.27-3+deb9u6: 4104 accounts, 
Mailbox format, LMTP delivery.

We run 'rsync' to a remote NAS daily. It takes about 6.5 hours to 
finish; last night it covered 12,265,387 files.
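
The backup job is a plain nightly rsync along these lines (flags and 
destination here are illustrative, not our exact ones):

rsync -aH --delete /data/ backup-nas:/backup/imap/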


Over the last half year, we have run into performance trouble. Server 
load grows up to 30 during peak hours due to I/O waits. We tried 
attaching additional hard drives (the 838G ones in the list below) and 
freeing up space with a rebalance. I think it helped a little, but not 
dramatically.
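
Concretely, we added the two 838G drives and rebalanced, roughly like 
this (the usage filter is illustrative; it rewrites only data chunks 
that are less than 75% full rather than everything):

# btrfs device add /dev/sdq /dev/sdr /data
# btrfs balance start -dusage=75 /data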

Is this a reasonable setup and use case for btrfs RAID-10? If so, are 
there any recommendations for achieving better performance?

Thank you. With kind regards
Milo



# megaclisas-status
-- Controller information --
-- ID | H/W Model                  | RAM    | Temp | BBU    | Firmware
c0    | AVAGO MegaRAID SAS 9361-8i | 1024MB | 72C  | Good   | FW: 24.16.0-0082

-- Array information --
-- ID | Type   |    Size |  Strpsz | Flags | DskCache |   Status | OS Path  | CacheCade |InProgress
c0u0  | RAID-0 |    838G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdq | None      |None
c0u1  | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sda | None      |None
c0u2  | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdb | None      |None
c0u3  | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdc | None      |None
c0u4  | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdd | None      |None
c0u5  | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sde | None      |None
c0u6  | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdf | None      |None
c0u7  | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdg | None      |None
c0u8  | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdh | None      |None
c0u9  | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdi | None      |None
c0u10 | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdj | None      |None
c0u11 | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdk | None      |None
c0u12 | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdl | None      |None
c0u13 | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdm | None      |None
c0u14 | RAID-0 |    558G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdn | None      |None
c0u15 | RAID-0 |    838G |  256 KB | RA,WB |  Enabled |  Optimal | /dev/sdr | None      |None

-- Disk information --
-- ID   | Type | Drive Model                       | Size     | Status          | Speed    | Temp | Slot ID  | LSI ID
c0u0p0  | HDD  | SEAGATE ST900MP0006 N003WAG0Q3S3  | 837.8 Gb | Online, Spun Up | 12.0Gb/s | 53C  | [8:14]   | 32
c0u1p0  | HDD  | HGST HUC156060CSS200 A3800XV250TJ | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 38C  | [8:0]    | 12
c0u2p0  | HDD  | HGST HUC156060CSS200 A3800XV3XT4J | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 43C  | [8:1]    | 11
c0u3p0  | HDD  | HGST HUC156060CSS200 ADB05ZG4XLZU | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 46C  | [8:2]    | 25
c0u4p0  | HDD  | HGST HUC156060CSS200 A3800XV3DWRL | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 48C  | [8:3]    | 14
c0u5p0  | HDD  | HGST HUC156060CSS200 A3800XV3XZTL | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 52C  | [8:4]    | 18
c0u6p0  | HDD  | HGST HUC156060CSS200 A3800XV3VSKJ | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 55C  | [8:5]    | 15
c0u7p0  | HDD  | SEAGATE ST600MP0006 N003WAF1LWKE  | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 56C  | [8:6]    | 28
c0u8p0  | HDD  | HGST HUC156060CSS200 A3800XV3XTDJ | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 55C  | [8:7]    | 20
c0u9p0  | HDD  | HGST HUC156060CSS200 A3800XV3T8XL | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 57C  | [8:8]    | 19
c0u10p0 | HDD  | HGST HUC156060CSS200 A7030XHL0ZYP | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 61C  | [8:9]    | 23
c0u11p0 | HDD  | HGST HUC156060CSS200 ADB05ZG4VR3P | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 60C  | [8:10]   | 24
c0u12p0 | HDD  | SEAGATE ST600MP0006 N003WAF195KA  | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 60C  | [8:11]   | 29
c0u13p0 | HDD  | SEAGATE ST600MP0006 N003WAF1LTZW  | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 56C  | [8:12]   | 26
c0u14p0 | HDD  | SEAGATE ST600MP0006 N003WAF1LWH6  | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 55C  | [8:13]   | 27
c0u15p0 | HDD  | SEAGATE ST900MP0006 N003WAG0Q414  | 837.8 Gb | Online, Spun Up | 12.0Gb/s | 47C  | [8:15]   | 33



# btrfs --version
btrfs-progs v4.7.3



# btrfs fi show
Label: 'DATA'  uuid: 5b285a46-e55d-4191-924f-0884fa06edd8
         Total devices 16 FS bytes used 3.49TiB
         devid    1 size 558.41GiB used 448.66GiB path /dev/sda
         devid    2 size 558.41GiB used 448.66GiB path /dev/sdb
         devid    4 size 558.41GiB used 448.66GiB path /dev/sdd
         devid    5 size 558.41GiB used 448.66GiB path /dev/sde
         devid    7 size 558.41GiB used 448.66GiB path /dev/sdg
         devid    8 size 558.41GiB used 448.66GiB path /dev/sdh
         devid    9 size 558.41GiB used 448.66GiB path /dev/sdf
         devid   10 size 558.41GiB used 448.66GiB path /dev/sdi
         devid   11 size 558.41GiB used 448.66GiB path /dev/sdj
         devid   13 size 558.41GiB used 448.66GiB path /dev/sdk
         devid   14 size 558.41GiB used 448.66GiB path /dev/sdc
         devid   15 size 558.41GiB used 448.66GiB path /dev/sdl
         devid   16 size 558.41GiB used 448.66GiB path /dev/sdm
         devid   17 size 558.41GiB used 448.66GiB path /dev/sdn
         devid   18 size 837.84GiB used 448.66GiB path /dev/sdr
         devid   19 size 837.84GiB used 448.66GiB path /dev/sdq



# btrfs fi df /data/
Data, RAID10: total=3.48TiB, used=3.47TiB
System, RAID10: total=256.00MiB, used=320.00KiB
Metadata, RAID10: total=21.00GiB, used=18.17GiB
GlobalReserve, single: total=512.00MiB, used=0.00B



I am not attaching the whole dmesg log; it is almost empty, with no 
errors. The only BTRFS lines are about relocations, like:

BTRFS info (device sda): relocating block group 29435663220736 flags 65
BTRFS info (device sda): found 54460 extents
BTRFS info (device sda): found 54459 extents
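
For reference, 'flags 65' decodes to 0x41, i.e. BTRFS_BLOCK_GROUP_DATA 
(0x1) | BTRFS_BLOCK_GROUP_RAID10 (0x40), so these are ordinary data 
block groups being relocated by the balance:

# printf '%#x\n' 65
0x41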


* Re: Btrfs RAID-10 performance
From: A L @ 2020-09-03 14:56 UTC
  To: Miloslav Hůla, linux-btrfs


On 2020-09-03 15:13, Miloslav Hůla wrote:
> [...]
>
> Is this a reasonable setup and use case for btrfs RAID-10? If so, are
> there any recommendations for achieving better performance?
>
Hi,

I think that with your use case of lots of concurrent reads and writes, 
it may be better to use RAID1. That would still keep two copies of every 
chunk of data on separate disks, but it allows up to 8 reads/writes to 
proceed in parallel instead of only 4 with RAID10, since RAID10 engages 
4 drives for every read/write.
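
If you want to try it, the profile can be converted online with a 
balance; something like the following (expect it to take a while with 
~3.5TiB of data):

# btrfs balance start -dconvert=raid1 -mconvert=raid1 /data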


