Linux-BTRFS Archive on lore.kernel.org
* RAID5 scrub performance
@ 2019-11-27 15:11 Jorge Bastos
  2019-11-28  0:01 ` Qu Wenruo
  0 siblings, 1 reply; 3+ messages in thread
From: Jorge Bastos @ 2019-11-27 15:11 UTC (permalink / raw)
  To: Btrfs BTRFS

I believe this is a known issue but wonder if there's something I can
do to optimize raid5 scrub speed, or if anything is in the works to
improve it.

kernel 5.3.8
btrfs-progs 5.3.1


Single disk filesystem is performing as expected:

UUID:             9c0ed213-d9c5-4e93-b9db-218b43533c15
Scrub started:    Tue Nov 26 21:58:20 2019
Status:           finished
Duration:         2:24:32
Total to scrub:   1.04TiB
Rate:             125.17MiB/s
Error summary:    no errors found



4 disk raid5 (raid1 metadata) on the same server using the same model
disks as above:

UUID:             b75ee8b5-ae1c-4395-aa39-bebf10993057
Scrub started:    Wed Nov 27 07:32:46 2019
Status:           running
Duration:         7:34:50
Time left:        1:52:37
ETA:              Wed Nov 27 17:00:18 2019
Total to scrub:   1.20TiB
Bytes scrubbed:   982.05GiB
Rate:             36.85MiB/s
Error summary:    no errors found



6 SSD raid5 (raid1 metadata) also on the same server, still slow for
SSDs but at least scrub performance is acceptable:

UUID:             e072aa60-33e2-4756-8496-c58cd8ba6053
Scrub started:    Wed Nov 27 15:08:31 2019
Status:           running
Duration:         0:01:40
Time left:        1:40:11
ETA:              Wed Nov 27 16:50:24 2019
Total to scrub:   3.24TiB
Bytes scrubbed:   54.37GiB
Rate:             556.73MiB/s
Error summary:    no errors found

I still have some reservations about btrfs raid5/6, so for now I use
it mostly for smaller filesystems, but this slow scrub performance
would mean multi-day scrubs for a large filesystem, which isn't very
practical.

Thanks,
Jorge


* Re: RAID5 scrub performance
  2019-11-27 15:11 RAID5 scrub performance Jorge Bastos
@ 2019-11-28  0:01 ` Qu Wenruo
  2019-11-28  9:24   ` Jorge Bastos
  0 siblings, 1 reply; 3+ messages in thread
From: Qu Wenruo @ 2019-11-28  0:01 UTC (permalink / raw)
  To: Jorge Bastos, Btrfs BTRFS




On 2019/11/27 11:11 PM, Jorge Bastos wrote:
> I believe this is a known issue but wonder if there's something I can
> do to optimize raid5 scrub speed, or if anything is in the works to
> improve it.
> 
> kernel 5.3.8
> btrfs-progs 5.3.1
> 
> 
> Single disk filesystem is performing as expected:
> 
> UUID:             9c0ed213-d9c5-4e93-b9db-218b43533c15
> Scrub started:    Tue Nov 26 21:58:20 2019
> Status:           finished
> Duration:         2:24:32
> Total to scrub:   1.04TiB
> Rate:             125.17MiB/s
> Error summary:    no errors found
> 
> 
> 
> 4 disk raid5 (raid1 metadata) on the same server using the same model
> disks as above:
> 
> UUID:             b75ee8b5-ae1c-4395-aa39-bebf10993057
> Scrub started:    Wed Nov 27 07:32:46 2019
> Status:           running
> Duration:         7:34:50
> Time left:        1:52:37
> ETA:              Wed Nov 27 17:00:18 2019
> Total to scrub:   1.20TiB
> Bytes scrubbed:   982.05GiB
> Rate:             36.85MiB/s
> Error summary:    no errors found
> 
> 
> 
> 6 SSD raid5 (raid1 metadata) also on the same server, still slow for
> SSDs but at least scrub performance is acceptable:
> 
> UUID:             e072aa60-33e2-4756-8496-c58cd8ba6053
> Scrub started:    Wed Nov 27 15:08:31 2019
> Status:           running
> Duration:         0:01:40
> Time left:        1:40:11
> ETA:              Wed Nov 27 16:50:24 2019
> Total to scrub:   3.24TiB
> Bytes scrubbed:   54.37GiB
> Rate:             556.73MiB/s
> Error summary:    no errors found
> 
> I still have some reservations about btrfs raid5/6, so for now I use
> it mostly for smaller filesystems, but this slow scrub performance
> would mean multi-day scrubs for a large filesystem, which isn't very
> practical.

Btrfs uses a not-so-optimal approach for multi-disk scrubs: it queues
a scrub for every disk at the same time.

So it's common to see a lot of racing and even conflicting seek requests.

Have you tried scrubbing only one disk at a time in such a case?
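
For example, a sequential per-device scrub could look like the sketch
below (the device paths are placeholders; substitute the pool's actual
member devices):

```shell
#!/bin/sh
# Scrub each member device of the raid5 pool one at a time.
# -B makes "btrfs scrub start" block until that device's scrub
# finishes, so the disks never seek against each other.
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    btrfs scrub start -B "$dev"
done
```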

Thanks,
Qu

> 
> Thanks,
> Jorge
> 




* Re: RAID5 scrub performance
  2019-11-28  0:01 ` Qu Wenruo
@ 2019-11-28  9:24   ` Jorge Bastos
  0 siblings, 0 replies; 3+ messages in thread
From: Jorge Bastos @ 2019-11-28  9:24 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS

Hi,

Thanks for the reply, but I'm not sure I understand: if I start the
scrub for a single device on the raid5 pool, it still scrubs the whole
filesystem, and speeds are the same.
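
In case it helps, per-device statistics can show what each member is
actually doing during the scrub (a sketch; /mnt/pool stands in for the
pool's real mount point):

```shell
# Break the scrub totals down by member device (-d) instead of
# showing only the filesystem-wide aggregate.
btrfs scrub status -d /mnt/pool
```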

Jorge




On Thu, Nov 28, 2019 at 12:01 AM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> On 2019/11/27 11:11 PM, Jorge Bastos wrote:
> > I believe this is a known issue but wonder if there's something I can
> > do to optimize raid5 scrub speed, or if anything is in the works to
> > improve it.
> >
> > kernel 5.3.8
> > btrfs-progs 5.3.1
> >
> >
> > Single disk filesystem is performing as expected:
> >
> > UUID:             9c0ed213-d9c5-4e93-b9db-218b43533c15
> > Scrub started:    Tue Nov 26 21:58:20 2019
> > Status:           finished
> > Duration:         2:24:32
> > Total to scrub:   1.04TiB
> > Rate:             125.17MiB/s
> > Error summary:    no errors found
> >
> >
> >
> > 4 disk raid5 (raid1 metadata) on the same server using the same model
> > disks as above:
> >
> > UUID:             b75ee8b5-ae1c-4395-aa39-bebf10993057
> > Scrub started:    Wed Nov 27 07:32:46 2019
> > Status:           running
> > Duration:         7:34:50
> > Time left:        1:52:37
> > ETA:              Wed Nov 27 17:00:18 2019
> > Total to scrub:   1.20TiB
> > Bytes scrubbed:   982.05GiB
> > Rate:             36.85MiB/s
> > Error summary:    no errors found
> >
> >
> >
> > 6 SSD raid5 (raid1 metadata) also on the same server, still slow for
> > SSDs but at least scrub performance is acceptable:
> >
> > UUID:             e072aa60-33e2-4756-8496-c58cd8ba6053
> > Scrub started:    Wed Nov 27 15:08:31 2019
> > Status:           running
> > Duration:         0:01:40
> > Time left:        1:40:11
> > ETA:              Wed Nov 27 16:50:24 2019
> > Total to scrub:   3.24TiB
> > Bytes scrubbed:   54.37GiB
> > Rate:             556.73MiB/s
> > Error summary:    no errors found
> >
> > I still have some reservations about btrfs raid5/6, so for now I use
> > it mostly for smaller filesystems, but this slow scrub performance
> > would mean multi-day scrubs for a large filesystem, which isn't very
> > practical.
>
> Btrfs uses a not-so-optimal approach for multi-disk scrubs: it queues
> a scrub for every disk at the same time.
>
> So it's common to see a lot of racing and even conflicting seek requests.
>
> Have you tried scrubbing only one disk at a time in such a case?
>
> Thanks,
> Qu
>
> >
> > Thanks,
> > Jorge
> >
>

