From: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
To: "Konstantin V. Gavrilenko" <k.gavrilenko@arhont.com>,
	Roman Mamedov <rm@romanrm.net>
Cc: Marat Khalili <mkh@rqc.ru>,
	linux-btrfs@vger.kernel.org,
	Peter Grandi <pg@btrfs.list.sabi.co.uk>
Subject: Re: slow btrfs with a single kworker process using 100% CPU
Date: Wed, 16 Aug 2017 14:46:34 +0200
Message-ID: <c750108a-f769-9061-93fa-ce0843989495@profihost.ag>
In-Reply-To: <31057849.442.1502886546489.JavaMail.gkos@dynomob>


Am 16.08.2017 um 14:29 schrieb Konstantin V. Gavrilenko:
> Roman, initially I had a single process occupying 100% CPU, which a sysrq trace showed sitting in "btrfs_find_space_for_alloc",
> but that was when I used the autodefrag, compress, compress-force and commit=10 mount flags, with space_cache at v1 by default.
> When I switched to "relatime,compress-force=zlib,space_cache=v2" the 100% CPU disappeared, but the shite performance remained.
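For reference, one way to pin down what a spinning kworker is doing (a generic sketch, not from the original report; <PID> is a placeholder for the busy thread's PID, and all of this needs root):

  # find the busy kworker thread
  top -bn1 | grep kworker
  # dump its current kernel stack
  cat /proc/<PID>/stack
  # or sample backtraces on all CPUs via sysrq; output lands in dmesg
  echo l > /proc/sysrq-trigger
  dmesg | tail -n 50
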
> 
> 
> As to the chunk size, there is no information in the article about the type of data that was used, while in our case we are fairly certain about the compressed block size (32-128k). I am currently leaning towards 32k, as it might be ideal for a 5-disk RAID5 array.
> 
> In theory (worked numbers sketched below):
> 1. The minimum compressed write (32k) would fill the chunk on a single disk, so the I/O cost of the operation would be 2 reads (original chunk + original parity) and 2 writes (new chunk + new parity).
> 
> 2. The maximum compressed write (128k) would update 1 chunk on each of the 4 data disks, plus 1 parity write.
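To make the arithmetic explicit (a sketch using the geometry described above: 5-disk RAID5, so 4 data chunks per stripe):

  chunk = 32k  -> full stripe = 4 x 32k = 128k data + 32k parity
  32k  write   -> 1 data chunk + parity: 2 reads + 2 writes (read-modify-write)
  128k write   -> one aligned full stripe: 5 writes, 0 reads

Note the 128k case only avoids the read-modify-write cycle when the extent is aligned to a stripe boundary.
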
> 
> Stefan, what mount flags do you use?

noatime,compress-force=zlib,noacl,space_cache,skip_balance,subvolid=5,subvol=/
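
For completeness, the options actually in effect on a mounted filesystem can be checked with either of the following (/path/to/mnt is a placeholder):

  findmnt -no OPTIONS /path/to/mnt
  grep btrfs /proc/mounts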

Greets,
Stefan


> kos
> 
> 
> 
> ----- Original Message -----
> From: "Roman Mamedov" <rm@romanrm.net>
> To: "Konstantin V. Gavrilenko" <k.gavrilenko@arhont.com>
> Cc: "Stefan Priebe - Profihost AG" <s.priebe@profihost.ag>, "Marat Khalili" <mkh@rqc.ru>, linux-btrfs@vger.kernel.org, "Peter Grandi" <pg@btrfs.list.sabi.co.uk>
> Sent: Wednesday, 16 August, 2017 2:00:03 PM
> Subject: Re: slow btrfs with a single kworker process using 100% CPU
> 
> On Wed, 16 Aug 2017 12:48:42 +0100 (BST)
> "Konstantin V. Gavrilenko" <k.gavrilenko@arhont.com> wrote:
> 
>> I believe a chunk size of 512kb is even worse for performance than the 256kb default setting on my HW RAID.
> 
> It might be, but that does not explain the original problem reported at all.
> If mdraid performance were the bottleneck, you would see high iowait and
> possibly some CPU load from the mdX_raidY threads, but not a single Btrfs
> thread pegged at 100% CPU.
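A quick way to tell the two cases apart (a generic sketch; iostat and pidstat come from the sysstat package):

  # 'wa' column high -> waiting on disks; 'sy' high -> kernel CPU bound
  vmstat 1
  # per-device utilisation and latency
  iostat -x 1
  # per-thread CPU usage; a kworker pegged in %system matches the report here
  pidstat -t 1
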
> 
>> So now I am moving the data off the array and will rebuild it with a 64k
>> or 32k chunk size and check the performance.
> 
> 64K is the sweet spot for RAID5/6:
> http://louwrentius.com/linux-raid-level-and-chunk-size-the-benchmarks.html
> 
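
For anyone rebuilding, the chunk size is set at array creation time, e.g. (device names and geometry are placeholders; --create is destructive):

  mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=64 /dev/sd[b-f]

An existing array can also be reshaped in place with "mdadm --grow /dev/md0 --chunk=64", though that rewrites every stripe and can take a long time.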

Thread overview: 17+ messages
2017-08-16  6:04 slow btrfs with a single kworker process using 100% CPU Stefan Priebe - Profihost AG
2017-08-16  6:53 ` Marat Khalili
2017-08-16  8:37   ` Stefan Priebe - Profihost AG
2017-08-16  9:02     ` Konstantin V. Gavrilenko
2017-08-16  9:26       ` Stefan Priebe - Profihost AG
2017-08-16 11:48         ` Konstantin V. Gavrilenko
2017-08-16 12:00           ` Roman Mamedov
2017-08-16 12:29             ` Konstantin V. Gavrilenko
2017-08-16 12:38               ` Stefan Priebe - Profihost AG
2017-08-16 12:46               ` Stefan Priebe - Profihost AG [this message]
2017-08-16 13:04               ` Stefan Priebe - Profihost AG
2017-08-17  5:47               ` Stefan Priebe - Profihost AG
2017-08-17  7:43                 ` Stefan Priebe - Profihost AG
2017-08-20 11:00                   ` Stefan Priebe - Profihost AG
2017-08-20 12:34                     ` Marat Khalili
2017-08-28 18:09                     ` Stefan Priebe - Profihost AG
2017-08-16 23:21       ` Peter Grandi
