From: Dave Chinner <dchinner@redhat.com>
To: Hillf Danton <hdanton@sina.com>
Cc: kernel test robot <oliver.sang@intel.com>,
	"Darrick J. Wong" <djwong@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	lkp@intel.com, zhengjun.xing@linux.intel.com
Subject: Re: [xfs]  6df693ed7b:  aim7.jobs-per-min -15.7% regression
Date: Sun, 15 Aug 2021 09:40:35 +1000
Message-ID: <20210814234035.GE2959@rh>
In-Reply-To: <20210809093114.3179-1-hdanton@sina.com>

On Mon, Aug 09, 2021 at 05:31:14PM +0800, Hillf Danton wrote:
> On Mon, 9 Aug 2021 14:42:48 +0800, kernel test robot wrote:
> > 
> > FYI, we noticed a -15.7% regression of aim7.jobs-per-min due to commit:
> > 
> > 
> > commit: 6df693ed7ba9ec03cafc38d5064de376a11243e2 ("xfs: per-cpu deferred inode inactivation queues")
> > https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git xfs-5.15-merge
> > 
> > 
> > in testcase: aim7
> > on test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
> > with following parameters:
> > 
> > 	disk: 4BRD_12G
> > 	md: RAID1
> > 	fs: xfs
> > 	test: disk_wrt
> > 	load: 3000
> > 	cpufreq_governor: performance
> > 	ucode: 0x5003006
> > 
> 
> See if scheduling can help, on the assumption that a bound worker should
> run for as short a time as it can.
> 
> The change below does two things:
> 1/ add a scheduling point in the inodegc worker and, as compensation,
> allow it to repeat gc until no more work is available.

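Roughly, change 1/ would put a cond_resched() between items and have the
worker re-poll its queue before exiting. A minimal sketch, assuming the
struct layout and helper names from the patch description rather than the
posted code:

  /* Illustrative sketch only -- not the posted patch. */
  static void xfs_inodegc_worker(struct work_struct *work)
  {
          struct xfs_inodegc *gc = container_of(work,
                          struct xfs_inodegc, work);
          struct llist_node *node;

          /* Keep draining until the per-cpu queue is empty... */
          while ((node = llist_del_all(&gc->list))) {
                  struct xfs_inode *ip, *n;

                  llist_for_each_entry_safe(ip, n, node, i_gclist) {
                          xfs_inodegc_inactivate(ip);
                          /*
                           * ...but yield between items so a bound
                           * worker never hogs its CPU.
                           */
                          cond_resched();
                  }
          }
  }
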
Do you have any evidence that this is a problem?

I mean, we bound the queue depth to 256 items, and my direct
measurements of workloads show that typical inactivation
processing does not block and takes roughly 50-100us per item. On
inodes that require lots of work (maybe minutes!), we end up
sleeping on locks or resource reservations fairly quickly, hence we
don't tend to rack up a significant amount of uninterrupted CPU time
in this loop at all.
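
Back of the envelope, from those numbers alone (simple arithmetic, not a
new measurement):

  256 items/queue * 50-100us/item = ~12.8-25.6ms of CPU per queue drain

so the uninterrupted CPU time a single queue drain can consume is already
tightly bounded.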

> 2/ make inodegc_wq unbound so that it can spawn workers, because they
> are no longer potential CPU hogs (this part is optional rather than
> mandatory).
> 
> The idea is to see if hot caches outweigh the cost of spawning workers.

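For reference, change 2/ is mechanically just the WQ_UNBOUND flag at
workqueue allocation time. A sketch, with the flags and max_active value
assumed rather than taken from the patch:

  /*
   * Illustrative only: unbound workers are selected with one flag
   * when the workqueue is allocated.
   */
  mp->m_inodegc_wq = alloc_workqueue("xfs-inodegc/%s",
                  WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_FREEZABLE,
                  0, mp->m_super->s_id);
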
NACK. We already know what impact that has: moving to bound
workqueues erased a 50-60% performance degradation seen with the
original queueing mechanism, which used unbound workqueues and
required inactivation to run on cold caches. IOWs, performance
analysis led us to short, bounded-depth per-cpu queues and single
depth bound per-cpu workqueues. We don't do complex stuff like this
unless it is necessary for performance and scalability...
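
For readers without the patches in front of them, the shape of that
design is roughly as follows. This is a sketch assembled from the
description above; the field names and the 256-item threshold are
assumptions, not the committed code:

  /* Per-cpu, bounded-depth queueing; the worker is kicked on the
   * submitting CPU so inactivation runs on hot caches. */
  static void xfs_inodegc_queue(struct xfs_inode *ip)
  {
          struct xfs_mount *mp = ip->i_mount;
          struct xfs_inodegc *gc;

          gc = get_cpu_ptr(mp->m_inodegc);
          llist_add(&ip->i_gclist, &gc->list);

          /* Once the bounded queue fills, run the bound worker. */
          if (++gc->items >= 256)
                  queue_work_on(smp_processor_id(),
                                mp->m_inodegc_wq, &gc->work);
          put_cpu_ptr(gc);
  }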

Cheers,

Dave.
-- 
Dave Chinner
dchinner@redhat.com


