From: Linus Torvalds <torvalds@linux-foundation.org>
To: Mateusz Guzik <mjguzik@gmail.com>,
	linux-arch <linux-arch@vger.kernel.org>,
	 Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	 Michael Ellerman <mpe@ellerman.id.au>
Cc: tony.luck@intel.com, viro@zeniv.linux.org.uk,
	linux-fsdevel@vger.kernel.org,
	Jan Glauber <jan.glauber@gmail.com>,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>
Subject: Re: lockref scalability on x86-64 vs cpu_relax
Date: Thu, 12 Jan 2023 18:13:16 -0600	[thread overview]
Message-ID: <CAHk-=wjthxgrLEvgZBUwd35e_mk=dCWKMUEURC6YsX5nWom8kQ@mail.gmail.com> (raw)
In-Reply-To: <CAGudoHHx0Nqg6DE70zAVA75eV-HXfWyhVMWZ-aSeOofkA_=WdA@mail.gmail.com>

[ Adding linux-arch, which is relevant but not very specific, and the
arm64 and powerpc maintainers that are the more specific cases for an
architecture where this might actually matter.

  See

        https://lore.kernel.org/all/CAGudoHHx0Nqg6DE70zAVA75eV-HXfWyhVMWZ-aSeOofkA_=WdA@mail.gmail.com/

  for original full email, but it might be sufficiently clear just
from this heavily cut-down context too ]

Side note on your access() changes - if it turns out that you can
remove all the cred games, we should possibly then revert my old
commit d7852fbd0f04 ("access: avoid the RCU grace period for the
temporary subjective credentials") which avoided the biggest issue
with the unnecessary cred switching.

I *think* access() is the only user of that special 'non_rcu' thing,
but it is possible that the whole 'non_rcu' thing ends up mattering
for cases where the cred actually does change because euid != uid (ie
suid programs), so this would need a bit more effort to do performance
testing on.

On Thu, Jan 12, 2023 at 5:36 PM Mateusz Guzik <mjguzik@gmail.com> wrote:
>
> To my understanding on said architecture failed cmpxchg still grants you
> exclusive access to the cacheline, making immediate retry preferable
> when trying to inc/dec unless a certain value is found.

I actually suspect that is _always_ the case - this is not like a
contended spinlock where we want to pause because we're waiting for
the value to change and become unlocked, this cmpxchg loop is likely
always better off just retrying with the new value.
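
[ For anyone reading along, a rough illustration of the two patterns
  being contrasted here -- this is userspace C with C11 atomics, a
  sketch rather than the actual lib/lockref.c code:

        #include <stdatomic.h>
        #include <stdbool.h>

        /* Spin-wait: no progress is possible until another CPU changes
         * the value, so pausing (cpu_relax) while polling makes sense. */
        static void spin_until_unlocked(atomic_int *lock)
        {
                while (atomic_load_explicit(lock, memory_order_relaxed))
                        ;       /* cpu_relax() belongs here */
        }

        /* cmpxchg retry: a failed compare-exchange hands back the value
         * it actually saw, so we can recompute and retry immediately --
         * there is nothing to wait for. */
        static bool lockless_increment(atomic_long *count)
        {
                long old = atomic_load_explicit(count, memory_order_relaxed);

                while (!atomic_compare_exchange_weak(count, &old, old + 1))
                        ;       /* 'old' was refreshed by the failure */
                return true;
        }
  ]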

That said, the "likely always better off" is purely about performance.

So I have this suspicion that the reason Tony added the cpu_relax()
was simply not about performance, but about other issues, like
fairness in SMT situations.

That said, even from a fairness perspective the cpu_relax() sounds a
bit odd and unlikely - we're literally yielding when we lost a race,
so it hurts the _loser_, not the winner, and thus might make fairness
worse too.

I dunno.  Tony may have some memory of what the issue was.

> ... without numbers attached to it. Given the above linked thread it
> looks like the arch this was targeting was itanium, not x86-64, but
> the change landed for everyone.

Yeah, if it was ia64-only, it's a non-issue these days. It's dead and
in pure maintenance mode from a kernel perspective (if even that).

> Later it was further augmented with:
> commit 893a7d32e8e04ca4d6c882336b26ed660ca0a48d
> Author: Jan Glauber <jan.glauber@gmail.com>
> Date:   Wed Jun 5 15:48:49 2019 +0200
>
>     lockref: Limit number of cmpxchg loop retries
> [snip]
>     With the retry limit the performance of an open-close testcase
>     improved between 60-70% on ThunderX2.
>
> While the benchmark was specifically on ThunderX2, the change once more
> was made for all archs.

Actually, in that case I did ask for the test to be run on x86
hardware too, and exactly like you found:

> I should note in my tests the retry limit was never reached fwiw.

the max loop retry number just isn't an issue. It fundamentally only
affects extremely unfair platforms, so it's arguably always the right
thing to do.

So it may be "ThunderX2 specific" in that that is where it was
noticed, but I think we can safely just consider the max loop thing to
be a generic safety net that hopefully simply never triggers in
practice on any sane platform.
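
[ The shape of that safety net, in the same sketch form as above and
  reusing its includes -- illustrative C11 code, not the kernel macro;
  the cap of 100 retries is from memory of that commit:

        /* Returns true if the lockless update won.  A false return means
         * the retry cap was hit and the caller should fall back to the
         * slow path, i.e. actually take the spinlock. */
        static bool lockless_increment_capped(atomic_long *count)
        {
                long old = atomic_load_explicit(count, memory_order_relaxed);
                int retry = 100;

                while (!atomic_compare_exchange_weak(count, &old, old + 1)) {
                        if (!--retry)
                                return false;   /* generic safety net */
                        /* the cpu_relax() under discussion sits at this
                         * point in the real loop */
                }
                return true;
        }
  ]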

> All that said, I think the thing to do here is to replace cpu_relax
> with a dedicated arch-dependent macro, akin to the following:

I would actually prefer just removing it entirely and see if somebody
else hollers. You have the numbers to prove it hurts on real hardware,
and I don't think we have any numbers to the contrary.

So I think it's better to trust the numbers and remove it as a
failure, than say "let's just remove it on x86-64 and leave everybody
else with the potentially broken code".

Because I do think that a cmpxchg loop that updates the value it
compares and exchanges is fundamentally different from a "busy-loop,
trying to read while locked", and with your numbers as ammunition, I
think it's better to just remove that cpu_relax() entirely.
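
[ Concretely, the change being argued for would be on the order of the
  below -- a hand-written sketch against my recollection of the
  CMPXCHG_LOOP macro in lib/lockref.c, so the context lines are
  approximate and this is not an actual tested patch:

        --- a/lib/lockref.c
        +++ b/lib/lockref.c
        @@ CMPXCHG_LOOP (context approximate) @@
                        if (!--retry)                           \
                                break;                          \
        -               cpu_relax();                            \
                }                                               \
  ]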

Then other architectures can try to run their numbers, and only *if*
it then turns out that they have a reason to do something else should
we make this conditional and different on different architectures.

Let's try to keep the code as common as possible until we have hard
evidence for special cases, in other words.

                 Linus

Thread overview: 108+ messages

2023-01-12 23:36 lockref scalability on x86-64 vs cpu_relax Mateusz Guzik
2023-01-13  0:13 ` Linus Torvalds [this message]
2023-01-13  0:30   ` Luck, Tony
2023-01-13  0:45     ` Linus Torvalds
2023-01-13  7:55     ` ia64 removal (was: Re: lockref scalability on x86-64 vs cpu_relax) Ard Biesheuvel
2023-01-13 16:17       ` Luck, Tony
2023-01-13 20:49       ` Jessica Clarke
2023-01-13 21:03         ` Luck, Tony
2023-01-13 21:04           ` Jessica Clarke
2023-01-13 21:05       ` John Paul Adrian Glaubitz
2023-01-13 23:25         ` Ard Biesheuvel
2023-01-14 11:24           ` Sedat Dilek
2023-01-14 11:28             ` Sedat Dilek
2023-01-15  0:27               ` Matthew Wilcox
2023-01-15 12:04                 ` Sedat Dilek
2023-01-16  9:42                   ` John Paul Adrian Glaubitz
2023-01-16  9:41                 ` John Paul Adrian Glaubitz
2023-01-16 13:28                   ` Matthew Wilcox
2023-01-16  9:40               ` John Paul Adrian Glaubitz
2023-01-16  9:37             ` John Paul Adrian Glaubitz
2023-01-16  9:32           ` John Paul Adrian Glaubitz
2023-01-16 10:09             ` Ard Biesheuvel
2023-01-13  1:12   ` lockref scalability on x86-64 vs cpu_relax Mateusz Guzik
2023-01-13  4:08     ` Linus Torvalds
2023-01-13  9:46     ` Will Deacon
2023-01-13  3:20   ` Nicholas Piggin
2023-01-13  4:15     ` Linus Torvalds
2023-01-13  5:36       ` Nicholas Piggin
2023-01-16 14:08     ` Memory transaction instructions David Howells
2023-01-16 15:09       ` Matthew Wilcox
2023-01-16 16:59       ` Linus Torvalds
2023-01-18  9:05       ` David Howells
2023-01-19  1:41         ` Nicholas Piggin
2023-01-13 10:23   ` lockref scalability on x86-64 vs cpu_relax Peter Zijlstra
2023-01-13 18:44   ` [PATCH] lockref: stop doing cpu_relax in the cmpxchg loop Mateusz Guzik
2023-01-13 21:47     ` Luck, Tony
2023-01-13 23:31       ` Linus Torvalds