* [PATCH] random: Optimize add_interrupt_randomness
@ 2018-02-28 21:43 Andi Kleen
  2018-02-28 23:02 ` Theodore Ts'o
From: Andi Kleen @ 2018-02-28 21:43 UTC (permalink / raw)
  To: tytso; +Cc: linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

add_interrupt_randomness() unconditionally wakes up code blocking
on /dev/random. Unfortunately this means every interrupt takes the
wait queue spinlock, which can be rather expensive on large systems
processing lots of interrupts.

We saw ~1% of CPU time spent spinning on this lock with a large
macro workload running on a large system.

I believe it's a recent regression (?)

Always check whether there is a waiter on the wait queue before
waking up; the check can be done with wq_has_sleeper() without
taking the spinlock.
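
For reference, a minimal sketch of the lockless check (approximating
the helpers in include/linux/wait.h, not a verbatim copy):

    /* Roughly what the wait.h helpers do; illustrative only. */
    static inline int waitqueue_active(struct wait_queue_head *wq_head)
    {
            return !list_empty(&wq_head->head);
    }

    static inline bool wq_has_sleeper(struct wait_queue_head *wq_head)
    {
            /*
             * Pairs with a barrier on the waiter side so a task that
             * has just added itself to the queue is not missed.
             */
            smp_mb();
            return waitqueue_active(wq_head);
    }

    /* Caller side, matching the hunk below: only enter the locked
     * wake_up path when a reader is actually sleeping.
     */
    if (entropy_bits >= random_read_wakeup_bits &&
        wq_has_sleeper(&random_read_wait)) {
            wake_up_interruptible(&random_read_wait);
            kill_fasync(&fasync, SIGIO, POLL_IN);
    }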

1.06%         10460  [kernel.vmlinux] [k] native_queued_spin_lock_slowpath
         |
         ---native_queued_spin_lock_slowpath
            |
             --0.57%--_raw_spin_lock_irqsave
                       |
                        --0.56%--__wake_up_common_lock
                                  credit_entropy_bits
                                  add_interrupt_randomness
                                  handle_irq_event_percpu
                                  handle_irq_event
                                  handle_edge_irq
                                  handle_irq
                                  do_IRQ
                                  common_interrupt

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 drivers/char/random.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index e5b3d3ba4660..64a897a2888f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -709,7 +709,8 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 		}
 
 		/* should we wake readers? */
-		if (entropy_bits >= random_read_wakeup_bits) {
+		if (entropy_bits >= random_read_wakeup_bits &&
+		    wq_has_sleeper(&random_read_wait)) {
 			wake_up_interruptible(&random_read_wait);
 			kill_fasync(&fasync, SIGIO, POLL_IN);
 		}
-- 
2.14.3


* Re: [PATCH] random: Optimize add_interrupt_randomness
  2018-02-28 21:43 [PATCH] random: Optimize add_interrupt_randomness Andi Kleen
@ 2018-02-28 23:02 ` Theodore Ts'o
From: Theodore Ts'o @ 2018-02-28 23:02 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-kernel, Andi Kleen

Thanks, applied.

					- Ted

On Wed, Feb 28, 2018 at 01:43:28PM -0800, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>
> 
> add_interrupt_randomness() unconditionally wakes up code blocking
> on /dev/random. Unfortunately this means every interrupt takes the
> wait queue spinlock, which can be rather expensive on large systems
> processing lots of interrupts.
> 
> We saw ~1% of CPU time spent spinning on this lock with a large
> macro workload running on a large system.
> 
> I believe it's a recent regression (?)
> 
> Always check whether there is a waiter on the wait queue before
> waking up; the check can be done with wq_has_sleeper() without
> taking the spinlock.
> 
> 1.06%         10460  [kernel.vmlinux] [k] native_queued_spin_lock_slowpath
>          |
>          ---native_queued_spin_lock_slowpath
>             |
>              --0.57%--_raw_spin_lock_irqsave
>                        |
>                         --0.56%--__wake_up_common_lock
>                                   credit_entropy_bits
>                                   add_interrupt_randomness
>                                   handle_irq_event_percpu
>                                   handle_irq_event
>                                   handle_edge_irq
>                                   handle_irq
>                                   do_IRQ
>                                   common_interrupt
> 
> Signed-off-by: Andi Kleen <ak@linux.intel.com>

