From: "Jason A. Donenfeld" <Jason@zx2c4.com>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
	"Theodore Ts'o" <tytso@mit.edu>,
	Thomas Gleixner <tglx@linutronix.de>,
	Peter Zijlstra <peterz@infradead.org>,
	Herbert Xu <herbert@gondor.apana.org.au>
Subject: Re: [PATCH 5/5] random: Defer processing of randomness on PREEMPT_RT.
Date: Sun, 30 Jan 2022 23:55:09 +0100	[thread overview]
Message-ID: <CAHmME9oXSx4JS0ZJeZTb7VC3gXoackuH389V9FDknHf_-rDJyA@mail.gmail.com> (raw)
In-Reply-To: <Yd8Ujw4t8DKYuhZK@linutronix.de>

Hey Sebastian,

I spent the weekend thinking about this some more. I'm actually
warming up a bit to the general approach of the original solution
here, though I still have questions. To summarize my understanding of
where we are:

Alternative solution we've been discussing:
- Replace spinlock_t with raw spinlocks.
- Ratelimit userspace-triggered, latency-inducing ioctls with
ratelimit() and an additional mutex of sorts (rough sketch below).
- Result: pretty much the same structure we have now, but with some
added protection for PREEMPT_RT.
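
To make sure we're picturing the same thing, here's that sketch
(illustrative only: write_pool_user(), user_write_lock, and
user_write_rs are names I just made up, and the actual chunking is
hand-waved):

#include <linux/mutex.h>
#include <linux/ratelimit.h>
#include <linux/spinlock.h>
#include <linux/uaccess.h>

/* The pool lock becomes a raw spinlock, so mixing from hard irq
 * context remains legal on PREEMPT_RT. */
static DEFINE_RAW_SPINLOCK(input_pool_lock);

/* Userspace-triggered writers (RNDADDENTROPY, writes to /dev/random)
 * get ratelimited and serialized so they can't sit on the raw lock
 * long enough to wreck irq latency. */
static DEFINE_MUTEX(user_write_lock);
static DEFINE_RATELIMIT_STATE(user_write_rs, DEFAULT_RATELIMIT_INTERVAL,
			      DEFAULT_RATELIMIT_BURST);

static ssize_t write_pool_user(const void __user *ubuf, size_t len)
{
	if (!__ratelimit(&user_write_rs))
		return -EAGAIN;

	mutex_lock(&user_write_lock);
	/* Copy in and mix in small, bounded chunks, taking
	 * input_pool_lock only briefly for each chunk. */
	mutex_unlock(&user_write_lock);
	return len;
}

That keeps essentially the structure we have now, just with the lock
type changed and the slow userspace paths throttled.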

Your original solution:
- Absorb into the fast pool during the actual IRQ, but never dump it
into the main pool (nor fast load into the crng directly if
crng_init==0) from the hard irq.
- Instead, have irq_thread() check whether the calling CPU's fast
pool count is >= 64, and if so, dump it into the main pool (or fast
load into the crng directly if crng_init==0). Sketch below.
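
Roughly this shape, paraphrasing from memory rather than quoting your
patch (helper names and signatures are approximate, and
process_interrupt_randomness() is just a placeholder for whatever
irq_thread() ends up calling):

/* Hard irq: only touch this CPU's fast pool; take no locks at all. */
void add_interrupt_randomness(int irq)
{
	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);

	fast_mix(fast_pool);	/* stir in cycles, irq number, regs->ip */
	fast_pool->count++;
}

/* Task context, called from irq_thread() after the handler has run. */
void process_interrupt_randomness(void)
{
	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);

	if (fast_pool->count < 64)
		return;

	if (unlikely(crng_init == 0))
		crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool));
	else
		mix_pool_bytes(fast_pool->pool, sizeof(fast_pool->pool));

	fast_pool->count = 0;
}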

I have two questions about the implications of your original solution:

1) How often does irq_thread() run? With what we have now, we dump the
fast pool into the main pool at exactly 64 events. With what you're
proposing, we're now in >= 64 territory. How do we conceptualize how
far beyond 64 it's likely to grow before irq_thread() does something?
Is it easy to make guarantees like, "at most, probably around 17"? Or
is it potentially unbounded? Growing beyond 64 isn't necessarily a
bad thing in itself, but it could potentially *slow* the collection
of entropy. That probably matters most in the crng_init==0 case,
where we're desperate to get whatever we can as fast as we can. But
depending on how large the overshoot is, it could matter for the main
case too.
Having some handle on the latency added here would be helpful for
thinking about this.

2) If we went with this solution, I think I'd prefer to actually do it
unconditionally, for PREEMPT_RT=y and PREEMPT_RT=n. It's easier to
track how this thing works if the state machine always works one way
instead of two. It also makes reasoning about performance margins for
the various components easier if there's only one path in use. Do you
see any downsides in doing this unconditionally?
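
Concretely, instead of something shaped like this (again, just a
sketch, not the real code):

	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
		/* leave the fast pool for irq_thread() to process */
	} else {
		spin_lock(&input_pool.lock);
		/* mix fast_pool->pool into the input pool here */
		spin_unlock(&input_pool.lock);
	}

we'd just always take the first branch, on every kernel config, and
the hard irq path would never touch input_pool.lock at all.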

Regards,
Jason

Thread overview: 31+ messages
2021-12-07 12:17 [PATCH 0/5] random: Add PREEMPT_RT support Sebastian Andrzej Siewior
2021-12-07 12:17 ` [PATCH 1/5] random: Remove unused irq_flags argument from add_interrupt_randomness() Sebastian Andrzej Siewior
2021-12-07 12:44   ` Wei Liu
2021-12-07 17:35   ` Jason A. Donenfeld
2021-12-07 12:17 ` [PATCH 2/5] irq: Remove unsued flags argument from __handle_irq_event_percpu() Sebastian Andrzej Siewior
2021-12-07 17:41   ` Jason A. Donenfeld
2021-12-11 22:39     ` Thomas Gleixner
2021-12-12 21:13       ` Jason A. Donenfeld
2021-12-07 12:17 ` [PATCH 3/5] random: Split add_interrupt_randomness() Sebastian Andrzej Siewior
2021-12-07 12:17 ` [PATCH 4/5] random: Move the fast_pool reset into the caller Sebastian Andrzej Siewior
2021-12-07 12:17 ` [PATCH 5/5] random: Defer processing of randomness on PREEMPT_RT Sebastian Andrzej Siewior
2021-12-07 18:14   ` Jason A. Donenfeld
2021-12-07 20:10     ` Sebastian Andrzej Siewior
2021-12-13 15:16       ` Sebastian Andrzej Siewior
2021-12-17  2:23       ` Herbert Xu
2021-12-17  9:00         ` Sebastian Andrzej Siewior
2021-12-18  3:24           ` Herbert Xu
2021-12-19  9:48             ` Sebastian Andrzej Siewior
2021-12-20  2:56               ` Herbert Xu
2021-12-20 14:38       ` Jason A. Donenfeld
2022-01-12 17:49         ` Sebastian Andrzej Siewior
2022-01-30 22:55           ` Jason A. Donenfeld [this message]
2022-01-31 16:33             ` Sebastian Andrzej Siewior
2022-02-04 15:31               ` [PATCH RFC v1] random: do not take spinlocks in irq handler Jason A. Donenfeld
2022-02-04 15:58                 ` Jason A. Donenfeld
2022-02-04 20:53                   ` Sebastian Andrzej Siewior
2022-02-04 20:47                 ` Sebastian Andrzej Siewior
2022-02-04 22:08                   ` Jason A. Donenfeld
2022-02-04 23:09                     ` [PATCH v2] random: defer fast pool mixing to worker Jason A. Donenfeld
2022-02-05  4:02                   ` [PATCH RFC v1] random: do not take spinlocks in irq handler Sultan Alsawaf
2022-02-05 12:50                     ` Jason A. Donenfeld
