From: "Paul E. McKenney" <>
To: "Jason A. Donenfeld" <>
Cc: Eric Biggers <>, Theodore Ts'o <>,
	LKML <>,
	Linux Crypto Mailing List <>,
	stable <>
Subject: Re: [PATCH RESEND] random: use correct memory barriers for crng_node_pool
Date: Mon, 20 Dec 2021 10:11:15 -0800	[thread overview]
Message-ID: <20211220181115.GZ641268@paulmck-ThinkPad-P17-Gen-1> (raw)
In-Reply-To: <>

On Mon, Dec 20, 2021 at 04:07:28PM +0100, Jason A. Donenfeld wrote:
> Hi Eric,
> This patch seems fine to me, and I'll apply it in a few days after
> sitting on the list for comments, but:
> > Note: READ_ONCE() could be used instead of smp_load_acquire(), but it is
> > harder to verify that it is correct, so I'd prefer not to use it here.
> > (,
> > and though it's a correct fix, it was derailed by a debate about whether
> > it's safe to use READ_ONCE() instead of smp_load_acquire() or not.)
> But holy smokes... I chuckled at your "please explain in English." :)
> Paul - if you'd like to look at this patch and confirm that this
> specific patch and usage is fine to be changed into READ_ONCE()
> instead of smp_load_acquire(), please pipe up here. And I really do
> mean this specific patch and usage, not to be confused with any other
> usage elsewhere in the kernel or question about general things, which
> doubtlessly involve larger discussions like the one Eric linked to
> above. If you're certain this patch here is READ_ONCE()able, I'd
> appreciate your saying so with a simple, "it is safe; go for it",
> since I'd definitely like the optimization if it's safe. If I don't
> hear from you, I'll apply this as-is from Eric, as I'd rather be safe
> than sorry.

First I would want to see some evidence that READ_ONCE() was really
providing measurable performance benefit.  Such evidence would be
easiest to obtain by running on a weakly ordered system such as ARM,
ARMv8, or PowerPC.

If this does provide a measurable benefit, why not the following?

static inline struct crng_state *select_crng(void)
{
	struct crng_state **pool;
	struct crng_state *pooln;
	int nid = numa_node_id();

	/* pairs with cmpxchg_release() in do_numa_crng_init() */
	pool = rcu_dereference(crng_node_pool);
	if (pool) {
		pooln = rcu_dereference(pool[nid]);
		if (pooln)
			return pooln;
	}

	return &primary_crng;
}

This is written in ignorance of the kfree() side of this code.  So another
question is "Suppose that there was a long delay (vCPU preemption, for
example) just before the 'return pooln'.  What prevents a use-after-free?"

Of course, this question applies equally to the smp_load_acquire() approach.

							Thanx, Paul


Thread overview: 11+ messages
2021-12-19  2:51 [PATCH RESEND] random: use correct memory barriers for crng_node_pool Eric Biggers
2021-12-20 15:07 ` Jason A. Donenfeld
2021-12-20 18:11   ` Paul E. McKenney [this message]
2021-12-20 18:16     ` Jason A. Donenfeld
2021-12-20 18:31       ` Paul E. McKenney
2021-12-20 18:35         ` Eric Biggers
2021-12-20 19:00           ` Paul E. McKenney
2021-12-20 21:45             ` Jason A. Donenfeld
2021-12-20 22:10               ` Eric Biggers
2021-12-20 15:17 ` Jason A. Donenfeld
2021-12-20 15:38   ` Eric Biggers
