From: Waiman Long <longman@redhat.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "ying.huang@intel.com" <ying.huang@intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Will Deacon <will@kernel.org>, Aaron Lu <aaron.lu@intel.com>,
Mel Gorman <mgorman@techsingularity.net>,
kernel test robot <oliver.sang@intel.com>,
Vlastimil Babka <vbabka@suse.cz>,
Dave Hansen <dave.hansen@linux.intel.com>,
Jesper Dangaard Brouer <brouer@redhat.com>,
Michal Hocko <mhocko@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
LKML <linux-kernel@vger.kernel.org>,
lkp@lists.01.org, kernel test robot <lkp@intel.com>,
Feng Tang <feng.tang@intel.com>,
Zhengjun Xing <zhengjun.xing@linux.intel.com>,
fengwei.yin@intel.com
Subject: Re: [mm/page_alloc] f26b3fa046: netperf.Throughput_Mbps -18.0% regression
Date: Tue, 10 May 2022 15:46:46 -0400
Message-ID: <149d85f2-8561-cfac-3447-425d6a4b8014@redhat.com>
In-Reply-To: <CAHk-=wiEtdHOeBti66NpSZDQw0KxcU45UNHaO-+Zwbiq3JEu+g@mail.gmail.com>
On 5/10/22 15:03, Linus Torvalds wrote:
> On Tue, May 10, 2022 at 11:47 AM Waiman Long <longman@redhat.com> wrote:
>
>> Qspinlock still has one head waiter spinning on the lock. This is much
>> better than the original ticket spinlock where there will be n waiters
>> spinning on the lock.
> Oh, absolutely. I'm not saying we should look at going back. I'm more
> asking whether maybe we could go even further..
We can probably go a bit further, but there will be a cost associated with it.
>> That is the cost of a cheap unlock. There is no way to eliminate all
>> lock spinning unless we use the MCS lock directly, which will require a
>> change in the locking API as well as a more expensive unlock.
> So there's no question that unlock would be more expensive for the
> contention case, since it would not only have to clear the lock
> itself, but also update the node it points to.
>
> But does it actually require a change in the locking API?
Right, it is not always necessary to change the locking API.
>
> The qspinlock slowpath already always allocates that mcs node (for
> some definition of "always" - I am obviously ignoring all the trylock
> cases both before and in the slowpath)
>
> But yes, clearly the simple store-release of the current
> queued_spin_unlock() wouldn't work as-is, and maybe the cost of
> replacing it with something else is much more expensive than any
> possible win.
At a minimum, unlock would need a read to determine if the lock is
contended and a write to clear the lock bit. An atomic read-modify-write
is expensive, while a non-atomic read followed by a write is racy.
Moreover, what is stored in the lock word is the queue tail, not the
current head, so there is no easy way to notify the head waiter that it
can take the lock. The only alternative I can think of is some kind of
proportional backoff for the head waiter, slowly widening the gap
between successive spinning attempts on the lock, at the cost of
increased lock acquisition latency.
> I think the PV case already basically does that - replacing the
> "store release" with a much more complex sequence. No?
That is true for pvqspinlock, where the unlock is an atomic operation
with increased cost. The original pv ticket spinlock, now replaced by
pvqspinlock, had a similarly expensive unlock, since it had to read the
lock to find out whether the head waiter's vCPU may have been preempted.
Cheers,
Longman