linux-kernel.vger.kernel.org archive mirror
* Re: 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker"
       [not found]       ` <BD03BFF6-C369-4D34-A38B-49653F1CBC53@oracle.com>
@ 2022-09-21 22:32         ` Jason A. Donenfeld
  2022-09-21 23:35           ` Jason A. Donenfeld
  2022-09-21 23:54           ` Tejun Heo
  0 siblings, 2 replies; 13+ messages in thread
From: Jason A. Donenfeld @ 2022-09-21 22:32 UTC (permalink / raw)
  To: Sherry Yang, netdev, linux-kernel, linux-rt-users, Tejun Heo,
	Lai Jiangshan, Sebastian Siewior
  Cc: Sebastian Siewior, Jack Vogel, Tariq Toukan

Hi Sherry (and Sebastian and Netdev and Tejun and whomever),

I'm top-replying so that I can provide an overview of what's up to other
readers, and then I'll leave your email below for additional context.

random.c used to have a hard IRQ handler that did something like this:

    do_some_stuff()
    spin_lock()
    do_some_other_stuff()
    spin_unlock()

That worked fine, but Sebastian pointed out that having spinlocks in a
hard IRQ handler was a big no-no for RT. Not wanting to make those into
raw spinlocks, he suggested we hoist things into a workqueue. So that's
what we did together, and now that function reads:

    do_some_stuff()
    queue_work_on(raw_smp_processor_id(), other_stuff_worker);

That seemed reasonable to me -- it's a pattern practiced a million times
all over the kernel -- and is currently how random.c's
add_interrupt_randomness() functions.
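
Concretely, the tail of add_interrupt_randomness() currently looks roughly
like this (simplified; the fast_mix() of the cycle counter, IP, and IRQ
number into the per-CPU pool is elided, and the exact context is in the
diffs further down this thread):

    void add_interrupt_randomness(int irq)
    {
            struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
            unsigned int new_count;

            /* ... fast_mix() new entropy into fast_pool->pool ... */
            new_count = ++fast_pool->count;

            /* a mix worker is already pending for this CPU */
            if (new_count & MIX_INFLIGHT)
                    return;

            /* defer until 1024 interrupts have accumulated or a second passed */
            if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
                    return;

            fast_pool->count |= MIX_INFLIGHT;
            queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
    }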

Sherry, however, has reported a ~10% performance regression using qperf
with TCP over some heavy-duty InfiniBand cards. According to Sherry's
tests, removing the call to queue_work_on() makes the performance
regression go away.

That leads me to suspect that queue_work_on() might actually not be as
cheap as I assumed? If so, is that surprising to anybody else? And what
should we do about this?

Unfortunately, as you'll see from reading below, I'm hopeless in trying
to recreate Sherry's test rig, and even Sherry was unable to reproduce
it on different hardware. Nonetheless, a 10% regression on fancy 40gbps
hardware seems like something worthy of wider concern.

What are our options? Investigate queue_work_on() bottlenecks? Move back
to the original pattern, but use raw spinlocks? Something else?

Sherry -- are you able to do a bit of profiling to see which
instructions, or which part of which function, are hottest or are
creating that bottleneck? I think we probably need more information to
do something with this.

Also, because I still have no idea how I can reproduce this myself, you
might need to take the reins in helping to develop and test a patch,
since I'm kind of stabbing in the dark here.

Anyway, because this might be rather involved, I figure it's best to
move this conversation on list in case other folks have insights.

Regards,
Jason

On Wed, Sep 21, 2022 at 06:09:27PM +0000, Sherry Yang wrote:
> > On Sep 20, 2022, at 7:44 AM, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> > 
> > Anyway, a few questions:
> > 1) Does the regression disappear if you change this line:
> > - queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
> > + schedule_work_on(raw_smp_processor_id(), &fast_pool->mix);
> 
> After applying this change, we still see performance regression there on linux-stable v5.15
> 
> > 
> > 2) Does the regression disappear if you remove this line:
> > - queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
> > + //queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
> 
> After applying this change, we see performance get recovered on linux-stable v5.15.
> 
> > 
> >> We could see performance regression there.
> > 
> > Can you give me some detailed instructions on how I can reproduce
> > this? Can it be reproduced inside of a single VM using network
> > namespaces, for example? Something like that would greatly help me
> > nail this down. For example, if you can give me a bash script that
> > does everything entirely on a single host?
> We are doing the qperf tcp latency test there. All test results above are collected from an X7 server with Mellanox Technologies
> MT27500 Family [ConnectX-3] cards: 
> Infiniband device 'mlx4_0' port 1 status: 
> default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1 
> base lid: 0x6 
> sm lid: 0x1 
> state: 4: ACTIVE 
> phys state: 5: LinkUp 
> rate: 40 Gb/sec (4X QDR) 
> link_layer: InfiniBand 
> 
> Cards are configured with IP addresses on private subnet for IPoIB 
> performance testing. 
> Regression identified in this bug is in TCP latency in this stack as reported 
> by qperf tcp_lat metric: 
> 
> We have one system listen as a qperf server:
> [root@yourQperfServer ~]# qperf
> 
> Have the other system connect to qperf server as a client (in this case, it’s X7 server with Mellanox card):
> [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
> 
> However, our test team ran other experiments yesterday.
> * Ran benchmark on X5-2 system over ixgbe interface 
> * Ran 8 processes of the benchmark on the original system over the Mellanox card 
> Both these experiments failed to reproduce the regression. This highlights that the regression is not seen over ethernet network devices 
> and is only seen when running a single instance of the qperf benchmark.


* Re: 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker"
  2022-09-21 22:32         ` 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker" Jason A. Donenfeld
@ 2022-09-21 23:35           ` Jason A. Donenfeld
  2022-09-21 23:54           ` Tejun Heo
  1 sibling, 0 replies; 13+ messages in thread
From: Jason A. Donenfeld @ 2022-09-21 23:35 UTC (permalink / raw)
  To: Sherry Yang, netdev, linux-kernel, linux-rt-users, Tejun Heo,
	Lai Jiangshan, Sebastian Siewior, sultan
  Cc: Jack Vogel, Tariq Toukan

Hey again Sherry,

On Thu, Sep 22, 2022 at 12:32:49AM +0200, Jason A. Donenfeld wrote:
> That leads me to suspect that queue_work_on() might actually not be as
> cheap as I assumed? If so, is that surprising to anybody else? And what
> should we do about this?

Sultan (CC'd) suggested I look at the much less expensive softirq
tasklet for this, which matches the use case pretty much entirely as
well. Can you try out this patch below and see if it resolves the
performance regression?

Thanks,
Jason

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 520a385c7dab..ad17b36cf977 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -918,13 +918,16 @@ EXPORT_SYMBOL_GPL(unregister_random_vmfork_notifier);
 #endif

 struct fast_pool {
-	struct work_struct mix;
+	struct tasklet_struct mix;
 	unsigned long pool[4];
 	unsigned long last;
 	unsigned int count;
 };

+static void mix_interrupt_randomness(struct tasklet_struct *work);
+
 static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
+	.mix = { .use_callback = true, .callback = mix_interrupt_randomness },
 #ifdef CONFIG_64BIT
 #define FASTMIX_PERM SIPHASH_PERMUTATION
 	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
@@ -973,7 +976,7 @@ int __cold random_online_cpu(unsigned int cpu)
 }
 #endif

-static void mix_interrupt_randomness(struct work_struct *work)
+static void mix_interrupt_randomness(struct tasklet_struct *work)
 {
 	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
 	/*
@@ -1027,10 +1030,8 @@ void add_interrupt_randomness(int irq)
 	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
 		return;

-	if (unlikely(!fast_pool->mix.func))
-		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
 	fast_pool->count |= MIX_INFLIGHT;
-	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+	tasklet_hi_schedule(&fast_pool->mix);
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);



* Re: 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker"
  2022-09-21 22:32         ` 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker" Jason A. Donenfeld
  2022-09-21 23:35           ` Jason A. Donenfeld
@ 2022-09-21 23:54           ` Tejun Heo
  2022-09-22 16:45             ` Jason A. Donenfeld
  2022-09-28 11:23             ` 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker" Sebastian Siewior
  1 sibling, 2 replies; 13+ messages in thread
From: Tejun Heo @ 2022-09-21 23:54 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Sherry Yang, netdev, linux-kernel, linux-rt-users, Lai Jiangshan,
	Sebastian Siewior, Jack Vogel, Tariq Toukan

Hello,

On Thu, Sep 22, 2022 at 12:32:49AM +0200, Jason A. Donenfeld wrote:
> What are our options? Investigate queue_work_on() bottlenecks? Move back
> to the original pattern, but use raw spinlocks? Something else?

I doubt it's queue_work_on() itself: if it's called at a very high frequency,
the duplicate calls would just fail to claim the PENDING bit and return. But
if it's being called at a high frequency, it'd be waking up a kthread over
and over again, which can get pretty expensive. Maybe that ends up competing
with ksoftirqd, which is handling net rx or sth?
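
For reference, the enqueue fast path looks roughly like this (simplified and
from memory, so a sketch of kernel/workqueue.c rather than the exact code;
tracing and debug checks omitted). Duplicate calls bail out on the PENDING
test, but the call that wins it pays for the insertion plus a worker kthread
wakeup inside __queue_work():

    bool queue_work_on(int cpu, struct workqueue_struct *wq,
                       struct work_struct *work)
    {
            bool ret = false;
            unsigned long flags;

            local_irq_save(flags);

            /* only the caller that claims PENDING actually queues the work */
            if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
                    __queue_work(cpu, wq, work);    /* insert + wake a worker */
                    ret = true;
            }

            local_irq_restore(flags);
            return ret;
    }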

So, yeah, I'd try something which doesn't always involve scheduling and a
context switch whether that's softirq, tasklet, or irq work. I probably am
mistaken but I thought RT kernel pushes irq handling to threads so that
these things can be handled sanely. Is this some special case?

Thanks.

-- 
tejun


* Re: 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker"
  2022-09-21 23:54           ` Tejun Heo
@ 2022-09-22 16:45             ` Jason A. Donenfeld
  2022-09-22 16:55               ` [PATCH] random: use tasklet rather than workqueue for mixing fast pool Jason A. Donenfeld
  2022-09-28 11:23             ` 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker" Sebastian Siewior
  1 sibling, 1 reply; 13+ messages in thread
From: Jason A. Donenfeld @ 2022-09-22 16:45 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Sherry Yang, netdev, linux-kernel, linux-rt-users, Lai Jiangshan,
	Sebastian Siewior, Jack Vogel, Tariq Toukan, sultan

Hi Tejun,

On Wed, Sep 21, 2022 at 01:54:43PM -1000, Tejun Heo wrote:
> Hello,
> 
> On Thu, Sep 22, 2022 at 12:32:49AM +0200, Jason A. Donenfeld wrote:
> > What are our options? Investigate queue_work_on() bottlenecks? Move back
> > to the original pattern, but use raw spinlocks? Something else?
> 
> I doubt it's queue_work_on() itself: if it's called at a very high frequency,
> the duplicate calls would just fail to claim the PENDING bit and return. But
> if it's being called at a high frequency, it'd be waking up a kthread over
> and over again, which can get pretty expensive. Maybe that ends up competing
> with ksoftirqd, which is handling net rx or sth?

Huh, yea, interesting theory. Orrr, the one time that it _does_ pass the
test_and_set_bit check, the extra overhead here is enough to screw up
the latency? Both theories sound at least plausible.

> So, yeah, I'd try something which doesn't always involve scheduling and a
> context switch whether that's softirq, tasklet, or irq work.

Alright, I'll do that. I posted a diff for Sherry to try, and I'll make
that into a real patch and wait for her test.

> I probably am
> mistaken but I thought RT kernel pushes irq handling to threads so that
> these things can be handled sanely. Is this some special case?

It does mostly. But there's still a hard IRQ handler, somewhere, because
IRQs gotta IRQ, and the RNG benefits from getting a timestamp exactly
when that happens. So here we are.
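
For the curious, the call site lives in the genirq core and runs after the
primary handler, still in hard IRQ context, even on RT. Roughly (paraphrased
from memory from kernel/irq/handle.c, so treat it as a sketch rather than the
exact code):

    irqreturn_t handle_irq_event_percpu(struct irq_desc *desc)
    {
            irqreturn_t retval = __handle_irq_event_percpu(desc);

            /* sample entropy while still in the hard IRQ path */
            add_interrupt_randomness(desc->irq_data.irq);

            if (!irq_settings_no_debug(desc))
                    note_interrupt(desc, retval);

            return retval;
    }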

Jason


* [PATCH] random: use tasklet rather than workqueue for mixing fast pool
  2022-09-22 16:45             ` Jason A. Donenfeld
@ 2022-09-22 16:55               ` Jason A. Donenfeld
  2022-09-26 22:04                 ` [PATCH v2] random: use immediate per-cpu timer " Jason A. Donenfeld
  0 siblings, 1 reply; 13+ messages in thread
From: Jason A. Donenfeld @ 2022-09-22 16:55 UTC (permalink / raw)
  To: Tejun Heo, netdev, linux-kernel, Jack Vogel, sultan, Sherry Yang
  Cc: Jason A. Donenfeld, stable

Previously, the fast pool was dumped into the main pool periodically in
the fast pool's hard IRQ handler. This worked fine and there weren't
problems with it, until RT came around. Since RT converts spinlocks into
sleeping locks, problems cropped up. Rather than switching to raw
spinlocks, the RT developers preferred we make the transformation from
originally doing:

    do_some_stuff()
    spin_lock()
    do_some_other_stuff()
    spin_unlock()

to doing:

    do_some_stuff()
    queue_work_on(some_other_stuff_worker)

This is an ordinary pattern done all over the kernel. However, Sherry
noticed a 10% performance regression in qperf TCP over a 40gbps
InfiniBand card. Quoting her message:

> MT27500 Family [ConnectX-3] cards:
> Infiniband device 'mlx4_0' port 1 status:
> default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> base lid: 0x6
> sm lid: 0x1
> state: 4: ACTIVE
> phys state: 5: LinkUp
> rate: 40 Gb/sec (4X QDR)
> link_layer: InfiniBand
>
> Cards are configured with IP addresses on private subnet for IPoIB
> performance testing.
> Regression identified in this bug is in TCP latency in this stack as reported
> by qperf tcp_lat metric:
>
> We have one system listen as a qperf server:
> [root@yourQperfServer ~]# qperf
>
> Have the other system connect to qperf server as a client (in this
> case, it’s X7 server with Mellanox card):
> [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat

Rather than incur the scheduling latency from queue_work_on, we can
instead switch to a tasklet, which runs on the same core -- exactly
what we want -- from softirq on interrupt exit, without additional
scheduling latency and with minimal logic in the enqueuing path.

Hopefully this restores performance from prior to the RT changes.

Reported-by: Sherry Yang <sherry.yang@oracle.com>
Suggested-by: Sultan Alsawaf <sultan@kerneltoast.com>
Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/lkml/YyuREcGAXV9828w5@zx2c4.com/
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
Hi Sherry,

I'm not going to commit to this until I receive your `Tested-by:`, so
please let me know if this fixes the problem. If not, we'll try
something else.

Thanks,
Jason

 drivers/char/random.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 520a385c7dab..ad17b36cf977 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -918,13 +918,16 @@ EXPORT_SYMBOL_GPL(unregister_random_vmfork_notifier);
 #endif
 
 struct fast_pool {
-	struct work_struct mix;
+	struct tasklet_struct mix;
 	unsigned long pool[4];
 	unsigned long last;
 	unsigned int count;
 };
 
+static void mix_interrupt_randomness(struct tasklet_struct *work);
+
 static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
+	.mix = { .use_callback = true, .callback = mix_interrupt_randomness },
 #ifdef CONFIG_64BIT
 #define FASTMIX_PERM SIPHASH_PERMUTATION
 	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
@@ -973,7 +976,7 @@ int __cold random_online_cpu(unsigned int cpu)
 }
 #endif
 
-static void mix_interrupt_randomness(struct work_struct *work)
+static void mix_interrupt_randomness(struct tasklet_struct *work)
 {
 	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
 	/*
@@ -1027,10 +1030,8 @@ void add_interrupt_randomness(int irq)
 	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
 		return;
 
-	if (unlikely(!fast_pool->mix.func))
-		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
 	fast_pool->count |= MIX_INFLIGHT;
-	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+	tasklet_hi_schedule(&fast_pool->mix);
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
 
-- 
2.37.3



* [PATCH v2] random: use immediate per-cpu timer rather than workqueue for mixing fast pool
  2022-09-22 16:55               ` [PATCH] random: use tasklet rather than workqueue for mixing fast pool Jason A. Donenfeld
@ 2022-09-26 22:04                 ` Jason A. Donenfeld
  2022-09-27  7:41                   ` David Laight
  0 siblings, 1 reply; 13+ messages in thread
From: Jason A. Donenfeld @ 2022-09-26 22:04 UTC (permalink / raw)
  To: netdev, linux-kernel
  Cc: Jason A. Donenfeld, Sherry Yang, Paul Webb, Phillip Goerl,
	Jack Vogel, Nicky Veitch, Colm Harrington, Ramanan Govindarajan,
	Sebastian Andrzej Siewior, Tejun Heo, Sultan Alsawaf, stable

Previously, the fast pool was dumped into the main pool periodically in
the fast pool's hard IRQ handler. This worked fine and there weren't
problems with it, until RT came around. Since RT converts spinlocks into
sleeping locks, problems cropped up. Rather than switching to raw
spinlocks, the RT developers preferred we make the transformation from
originally doing:

    do_some_stuff()
    spin_lock()
    do_some_other_stuff()
    spin_unlock()

to doing:

    do_some_stuff()
    queue_work_on(some_other_stuff_worker)

This is an ordinary pattern done all over the kernel. However, Sherry
noticed a 10% performance regression in qperf TCP over a 40gbps
InfiniBand card. Quoting her message:

> MT27500 Family [ConnectX-3] cards:
> Infiniband device 'mlx4_0' port 1 status:
> default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> base lid: 0x6
> sm lid: 0x1
> state: 4: ACTIVE
> phys state: 5: LinkUp
> rate: 40 Gb/sec (4X QDR)
> link_layer: InfiniBand
>
> Cards are configured with IP addresses on private subnet for IPoIB
> performance testing.
> Regression identified in this bug is in TCP latency in this stack as reported
> by qperf tcp_lat metric:
>
> We have one system listen as a qperf server:
> [root@yourQperfServer ~]# qperf
>
> Have the other system connect to qperf server as a client (in this
> case, it’s X7 server with Mellanox card):
> [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat

Rather than incur the scheduling latency from queue_work_on, we can
instead switch to running on the next timer tick, on the same core,
deferrably so. This also batches things a bit more -- once per jiffy --
which is probably okay now that mix_interrupt_randomness() can credit
multiple bits at once. It still puts a bit of pressure on fast_mix(),
but hopefully that's acceptable.

Hopefully this restores performance from prior to the RT changes.

Reported-by: Sherry Yang <sherry.yang@oracle.com>
Reported-by: Paul Webb <paul.x.webb@oracle.com>
Cc: Sherry Yang <sherry.yang@oracle.com>
Cc: Phillip Goerl <phillip.goerl@oracle.com>
Cc: Jack Vogel <jack.vogel@oracle.com>
Cc: Nicky Veitch <nicky.veitch@oracle.com>
Cc: Colm Harrington <colm.harrington@oracle.com>
Cc: Ramanan Govindarajan <ramanan.govindarajan@oracle.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Sultan Alsawaf <sultan@kerneltoast.com>
Cc: stable@vger.kernel.org
Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 1cb53495e8f7..08bb46a50802 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -928,17 +928,20 @@ struct fast_pool {
 	unsigned long pool[4];
 	unsigned long last;
 	unsigned int count;
-	struct work_struct mix;
+	struct timer_list mix;
 };
 
+static void mix_interrupt_randomness(struct timer_list *work);
+
 static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
 #ifdef CONFIG_64BIT
 #define FASTMIX_PERM SIPHASH_PERMUTATION
-	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
+	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 },
 #else
 #define FASTMIX_PERM HSIPHASH_PERMUTATION
-	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
+	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
 #endif
+	.mix = __TIMER_INITIALIZER(mix_interrupt_randomness, TIMER_DEFERRABLE)
 };
 
 /*
@@ -980,7 +983,7 @@ int __cold random_online_cpu(unsigned int cpu)
 }
 #endif
 
-static void mix_interrupt_randomness(struct work_struct *work)
+static void mix_interrupt_randomness(struct timer_list *work)
 {
 	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
 	/*
@@ -1034,10 +1037,11 @@ void add_interrupt_randomness(int irq)
 	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
 		return;
 
-	if (unlikely(!fast_pool->mix.func))
-		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
 	fast_pool->count |= MIX_INFLIGHT;
-	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+	if (!timer_pending(&fast_pool->mix)) {
+		fast_pool->mix.expires = jiffies;
+		add_timer_on(&fast_pool->mix, raw_smp_processor_id());
+	}
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
 
-- 
2.37.3



* RE: [PATCH v2] random: use immediate per-cpu timer rather than workqueue for mixing fast pool
  2022-09-26 22:04                 ` [PATCH v2] random: use immediate per-cpu timer " Jason A. Donenfeld
@ 2022-09-27  7:41                   ` David Laight
  2022-09-27  8:23                     ` Jason A. Donenfeld
  0 siblings, 1 reply; 13+ messages in thread
From: David Laight @ 2022-09-27  7:41 UTC (permalink / raw)
  To: 'Jason A. Donenfeld', netdev, linux-kernel
  Cc: Sherry Yang, Paul Webb, Phillip Goerl, Jack Vogel, Nicky Veitch,
	Colm Harrington, Ramanan Govindarajan, Sebastian Andrzej Siewior,
	Tejun Heo, Sultan Alsawaf, stable

From: Jason A. Donenfeld
> Sent: 26 September 2022 23:05
> 
> Previously, the fast pool was dumped into the main pool periodically in
> the fast pool's hard IRQ handler. This worked fine and there weren't
> problems with it, until RT came around. Since RT converts spinlocks into
> sleeping locks, problems cropped up. Rather than switching to raw
> spinlocks, the RT developers preferred we make the transformation from
> originally doing:
> 
>     do_some_stuff()
>     spin_lock()
>     do_some_other_stuff()
>     spin_unlock()
> 
> to doing:
> 
>     do_some_stuff()
>     queue_work_on(some_other_stuff_worker)
> 
> This is an ordinary pattern done all over the kernel. However, Sherry
> noticed a 10% performance regression in qperf TCP over a 40gbps
> InfiniBand card. Quoting her message:
> 
> > MT27500 Family [ConnectX-3] cards:
> > Infiniband device 'mlx4_0' port 1 status:
> > default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> > base lid: 0x6
> > sm lid: 0x1
> > state: 4: ACTIVE
> > phys state: 5: LinkUp
> > rate: 40 Gb/sec (4X QDR)
> > link_layer: InfiniBand
> >
> > Cards are configured with IP addresses on private subnet for IPoIB
> > performance testing.
> > Regression identified in this bug is in TCP latency in this stack as reported
> > by qperf tcp_lat metric:
> >
> > We have one system listen as a qperf server:
> > [root@yourQperfServer ~]# qperf
> >
> > Have the other system connect to qperf server as a client (in this
> > case, it’s X7 server with Mellanox card):
> > [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
> 
> Rather than incur the scheduling latency from queue_work_on, we can
> instead switch to running on the next timer tick, on the same core,
> deferrably so. This also batches things a bit more -- once per jiffy --
> which is probably okay now that mix_interrupt_randomness() can credit
> multiple bits at once. It still puts a bit of pressure on fast_mix(),
> but hopefully that's acceptable.

I thought NOHZ systems didn't take a timer interrupt every 'jiffy'.
If that is true what actually happens?

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


* Re: [PATCH v2] random: use immediate per-cpu timer rather than workqueue for mixing fast pool
  2022-09-27  7:41                   ` David Laight
@ 2022-09-27  8:23                     ` Jason A. Donenfeld
  2022-09-27 10:42                       ` [PATCH v3] random: use expired per-cpu timer rather than wq " Jason A. Donenfeld
  0 siblings, 1 reply; 13+ messages in thread
From: Jason A. Donenfeld @ 2022-09-27  8:23 UTC (permalink / raw)
  To: David Laight
  Cc: netdev, linux-kernel, Sherry Yang, Paul Webb, Phillip Goerl,
	Jack Vogel, Nicky Veitch, Colm Harrington, Ramanan Govindarajan,
	Sebastian Andrzej Siewior, Tejun Heo, Sultan Alsawaf, stable

On Tue, Sep 27, 2022 at 07:41:52AM +0000, David Laight wrote:
> From: Jason A. Donenfeld
> > Sent: 26 September 2022 23:05
> > 
> > Previously, the fast pool was dumped into the main pool periodically in
> > the fast pool's hard IRQ handler. This worked fine and there weren't
> > problems with it, until RT came around. Since RT converts spinlocks into
> > sleeping locks, problems cropped up. Rather than switching to raw
> > spinlocks, the RT developers preferred we make the transformation from
> > originally doing:
> > 
> >     do_some_stuff()
> >     spin_lock()
> >     do_some_other_stuff()
> >     spin_unlock()
> > 
> > to doing:
> > 
> >     do_some_stuff()
> >     queue_work_on(some_other_stuff_worker)
> > 
> > This is an ordinary pattern done all over the kernel. However, Sherry
> > noticed a 10% performance regression in qperf TCP over a 40gbps
> > InfiniBand card. Quoting her message:
> > 
> > > MT27500 Family [ConnectX-3] cards:
> > > Infiniband device 'mlx4_0' port 1 status:
> > > default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> > > base lid: 0x6
> > > sm lid: 0x1
> > > state: 4: ACTIVE
> > > phys state: 5: LinkUp
> > > rate: 40 Gb/sec (4X QDR)
> > > link_layer: InfiniBand
> > >
> > > Cards are configured with IP addresses on private subnet for IPoIB
> > > performance testing.
> > > Regression identified in this bug is in TCP latency in this stack as reported
> > > by qperf tcp_lat metric:
> > >
> > > We have one system listen as a qperf server:
> > > [root@yourQperfServer ~]# qperf
> > >
> > > Have the other system connect to qperf server as a client (in this
> > > case, it’s X7 server with Mellanox card):
> > > [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
> > 
> > Rather than incur the scheduling latency from queue_work_on, we can
> > instead switch to running on the next timer tick, on the same core,
> > deferrably so. This also batches things a bit more -- once per jiffy --
> > which is probably okay now that mix_interrupt_randomness() can credit
> > multiple bits at once. It still puts a bit of pressure on fast_mix(),
> > but hopefully that's acceptable.
> 
> I thought NOHZ systems didn't take a timer interrupt every 'jiffy'.
> If that is true what actually happens?

The TIMER_DEFERRABLE part of this patch is a mistake; I'm going to make
that 0. However, since expires==jiffies, there's no difference. It's
still undesirable though.

Jason


* [PATCH v3] random: use expired per-cpu timer rather than wq for mixing fast pool
  2022-09-27  8:23                     ` Jason A. Donenfeld
@ 2022-09-27 10:42                       ` Jason A. Donenfeld
  2022-09-28 12:06                         ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 13+ messages in thread
From: Jason A. Donenfeld @ 2022-09-27 10:42 UTC (permalink / raw)
  To: netdev, linux-kernel
  Cc: Jason A. Donenfeld, Sherry Yang, Paul Webb, Phillip Goerl,
	Jack Vogel, Nicky Veitch, Colm Harrington, Ramanan Govindarajan,
	Sebastian Andrzej Siewior, Dominik Brodowski, Tejun Heo,
	Sultan Alsawaf, stable

Previously, the fast pool was dumped into the main pool periodically in
the fast pool's hard IRQ handler. This worked fine and there weren't
problems with it, until RT came around. Since RT converts spinlocks into
sleeping locks, problems cropped up. Rather than switching to raw
spinlocks, the RT developers preferred we make the transformation from
originally doing:

    do_some_stuff()
    spin_lock()
    do_some_other_stuff()
    spin_unlock()

to doing:

    do_some_stuff()
    queue_work_on(some_other_stuff_worker)

This is an ordinary pattern done all over the kernel. However, Sherry
noticed a 10% performance regression in qperf TCP over a 40gbps
InfiniBand card. Quoting her message:

> MT27500 Family [ConnectX-3] cards:
> Infiniband device 'mlx4_0' port 1 status:
> default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> base lid: 0x6
> sm lid: 0x1
> state: 4: ACTIVE
> phys state: 5: LinkUp
> rate: 40 Gb/sec (4X QDR)
> link_layer: InfiniBand
>
> Cards are configured with IP addresses on private subnet for IPoIB
> performance testing.
> Regression identified in this bug is in TCP latency in this stack as reported
> by qperf tcp_lat metric:
>
> We have one system listen as a qperf server:
> [root@yourQperfServer ~]# qperf
>
> Have the other system connect to qperf server as a client (in this
> case, it’s X7 server with Mellanox card):
> [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat

Rather than incur the scheduling latency from queue_work_on, we can
instead switch to running on the next timer tick, on the same core. This
also batches things a bit more -- once per jiffy -- which is okay now
that mix_interrupt_randomness() can credit multiple bits at once.

Reported-by: Sherry Yang <sherry.yang@oracle.com>
Tested-by: Paul Webb <paul.x.webb@oracle.com>
Cc: Sherry Yang <sherry.yang@oracle.com>
Cc: Phillip Goerl <phillip.goerl@oracle.com>
Cc: Jack Vogel <jack.vogel@oracle.com>
Cc: Nicky Veitch <nicky.veitch@oracle.com>
Cc: Colm Harrington <colm.harrington@oracle.com>
Cc: Ramanan Govindarajan <ramanan.govindarajan@oracle.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Tejun Heo <tj@kernel.org>
Cc: Sultan Alsawaf <sultan@kerneltoast.com>
Cc: stable@vger.kernel.org
Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index a90d96f4b3bb..e591c6aadca4 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -921,17 +921,20 @@ struct fast_pool {
 	unsigned long pool[4];
 	unsigned long last;
 	unsigned int count;
-	struct work_struct mix;
+	struct timer_list mix;
 };
 
+static void mix_interrupt_randomness(struct timer_list *work);
+
 static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
 #ifdef CONFIG_64BIT
 #define FASTMIX_PERM SIPHASH_PERMUTATION
-	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
+	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 },
 #else
 #define FASTMIX_PERM HSIPHASH_PERMUTATION
-	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
+	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
 #endif
+	.mix = __TIMER_INITIALIZER(mix_interrupt_randomness, 0)
 };
 
 /*
@@ -973,7 +976,7 @@ int __cold random_online_cpu(unsigned int cpu)
 }
 #endif
 
-static void mix_interrupt_randomness(struct work_struct *work)
+static void mix_interrupt_randomness(struct timer_list *work)
 {
 	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
 	/*
@@ -1027,10 +1030,11 @@ void add_interrupt_randomness(int irq)
 	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
 		return;
 
-	if (unlikely(!fast_pool->mix.func))
-		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
 	fast_pool->count |= MIX_INFLIGHT;
-	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+	if (!timer_pending(&fast_pool->mix)) {
+		fast_pool->mix.expires = jiffies;
+		add_timer_on(&fast_pool->mix, raw_smp_processor_id());
+	}
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
 
-- 
2.37.3



* Re: 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker"
  2022-09-21 23:54           ` Tejun Heo
  2022-09-22 16:45             ` Jason A. Donenfeld
@ 2022-09-28 11:23             ` Sebastian Siewior
  1 sibling, 0 replies; 13+ messages in thread
From: Sebastian Siewior @ 2022-09-28 11:23 UTC (permalink / raw)
  To: Tejun Heo, Sherry Yang
  Cc: Jason A. Donenfeld, netdev, linux-kernel, linux-rt-users,
	Lai Jiangshan, Jack Vogel, Tariq Toukan

On 2022-09-21 13:54:43 [-1000], Tejun Heo wrote:
> Hello,
Hi,

> On Thu, Sep 22, 2022 at 12:32:49AM +0200, Jason A. Donenfeld wrote:
> > What are our options? Investigate queue_work_on() bottlenecks? Move back
> > to the original pattern, but use raw spinlocks? Something else?
> 
> I doubt it's queue_work_on() itself: if it's called at a very high frequency,
> the duplicate calls would just fail to claim the PENDING bit and return. But
> if it's being called at a high frequency, it'd be waking up a kthread over
> and over again, which can get pretty expensive. Maybe that ends up competing
> with ksoftirqd, which is handling net rx or sth?

There is this (simplified):
|         if (new_count & MIX_INFLIGHT)
|                 return;
| 
|         if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
|                 return;
| 
|         fast_pool->count |= MIX_INFLIGHT;
|         queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);

at least 1k interrupts are needed and a second must pass before a worker
will be scheduled. Oh wait. We need only one of the two. So how many
interrupts do we get per second?
Is the regression coming from more than 1k interrupts in less than a
second, or from a context switch each second? Because if it is a context
switch every second then I am surprised to see a 10% performance drop in
this case, since context switches should happen for other reasons, too,
unless the CPU is isolated.

[ There isn't massive claiming of the PENDING bit or massive wakeups, because
fast_pool is per-CPU and because of the MIX_INFLIGHT bit. ]

> So, yeah, I'd try something which doesn't always involve scheduling and a
> context switch whether that's softirq, tasklet, or irq work. I probably am
> mistaken but I thought RT kernel pushes irq handling to threads so that
> these things can be handled sanely. Is this some special case?

As Jason explained, this part is invoked from the non-threaded (hard IRQ) part.

> Thanks.

Sebastian


* Re: [PATCH v3] random: use expired per-cpu timer rather than wq for mixing fast pool
  2022-09-27 10:42                       ` [PATCH v3] random: use expired per-cpu timer rather than wq " Jason A. Donenfeld
@ 2022-09-28 12:06                         ` Sebastian Andrzej Siewior
  2022-09-28 16:15                           ` Jason A. Donenfeld
  0 siblings, 1 reply; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-09-28 12:06 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: netdev, linux-kernel, Sherry Yang, Paul Webb, Phillip Goerl,
	Jack Vogel, Nicky Veitch, Colm Harrington, Ramanan Govindarajan,
	Dominik Brodowski, Tejun Heo, Sultan Alsawaf, stable

On 2022-09-27 12:42:33 [+0200], Jason A. Donenfeld wrote:
…
> This is an ordinary pattern done all over the kernel. However, Sherry
> noticed a 10% performance regression in qperf TCP over a 40gbps
> InfiniBand card. Quoting her message:
> 
> > MT27500 Family [ConnectX-3] cards:
> > Infiniband device 'mlx4_0' port 1 status:
…

While looking at the mlx4 driver, it looks like they don't use any NAPI
handling in their interrupt handler, which _might_ mean that they
handle more than 1k interrupts a second. I'm still curious to get that
ACKed from Sherry's side.

Jason, from random's point of view: deferring until 1k interrupts + 1sec
delay is not desired due to low entropy, right?

> Rather than incur the scheduling latency from queue_work_on, we can
> instead switch to running on the next timer tick, on the same core. This
> also batches things a bit more -- once per jiffy -- which is okay now
> that mix_interrupt_randomness() can credit multiple bits at once.

Hmmm. Do you see higher contention on input_pool.lock? Just asking
because if more than one CPU invokes this timer callback at around the same
time, then they block on the same lock.
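
To spell out the worry (a sketch only -- names as in the patch, the body
heavily simplified compared to the real mix_interrupt_randomness()): the
per-CPU half is cheap, but every callback ends up in mix_pool_bytes(), which
takes the one global input_pool.lock, so timers expiring on the same tick on
many CPUs would all serialize there:

    static void mix_interrupt_randomness(struct timer_list *work)
    {
            struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
            unsigned long pool[2];
            unsigned int count;

            /* per-CPU part: snapshot and reset the fast pool, no shared state */
            memcpy(pool, fast_pool->pool, sizeof(pool));
            count = fast_pool->count;
            fast_pool->count = 0;
            fast_pool->last = jiffies;

            /* shared part: takes the global input_pool.lock internally */
            mix_pool_bytes(pool, sizeof(pool));
            credit_init_bits(max(1u, (count & U16_MAX) / 64));
    }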

Sebastian


* Re: [PATCH v3] random: use expired per-cpu timer rather than wq for mixing fast pool
  2022-09-28 12:06                         ` Sebastian Andrzej Siewior
@ 2022-09-28 16:15                           ` Jason A. Donenfeld
  2022-09-29 14:18                             ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 13+ messages in thread
From: Jason A. Donenfeld @ 2022-09-28 16:15 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: netdev, linux-kernel, Sherry Yang, Paul Webb, Phillip Goerl,
	Jack Vogel, Nicky Veitch, Colm Harrington, Ramanan Govindarajan,
	Dominik Brodowski, Tejun Heo, Sultan Alsawaf, stable

Hi Sebastian,

On Wed, Sep 28, 2022 at 02:06:45PM +0200, Sebastian Andrzej Siewior wrote:
> On 2022-09-27 12:42:33 [+0200], Jason A. Donenfeld wrote:
> …
> > This is an ordinary pattern done all over the kernel. However, Sherry
> > noticed a 10% performance regression in qperf TCP over a 40gbps
> > InfiniBand card. Quoting her message:
> > 
> > > MT27500 Family [ConnectX-3] cards:
> > > Infiniband device 'mlx4_0' port 1 status:
> …
> 
> While looking at the mlx4 driver, it looks like they don't use any NAPI
> handling in their interrupt handler, which _might_ mean that they
> handle more than 1k interrupts a second. I'm still curious to get that
> ACKed from Sherry's side.

Are you sure about that? So far as I can tell drivers/net/ethernet/
mellanox/mlx4 has plenty of napi_schedule/napi_enable and such. Or are
you looking at the infiniband driver instead? I don't really know how
these interact.

But yea, if we've got a driver not using NAPI at 40gbps that's obviously
going to be a problem.

> Jason, from random's point of view: deferring until 1k interrupts + 1sec
> delay is not desired due to low entropy, right?

Definitely || is preferable to &&.
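
(That is, the early-return in add_interrupt_randomness() already encodes the
OR -- we mix when either 1024 interrupts have accumulated or a second has
passed:

    if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
            return; /* neither threshold reached yet -- keep accumulating */

the && there is just the negation of that ||.)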

> 
> > Rather than incur the scheduling latency from queue_work_on, we can
> > instead switch to running on the next timer tick, on the same core. This
> > also batches things a bit more -- once per jiffy -- which is okay now
> > that mix_interrupt_randomness() can credit multiple bits at once.
> 
> Hmmm. Do you see higher contention on input_pool.lock? Just asking
> because if more than one CPU invokes this timer callback at around the same
> time, then they block on the same lock.

I've been doing various experiments, sending mini patches to Oracle and
having them test this in their rig. So far, it looks like the cost of
the body of the worker itself doesn't matter much, but rather the cost
of the enqueueing function is key. Still investigating though.

It's a bit frustrating, as all I have to work with are results from the
tests, and no perf analysis. It'd be great if an engineer at Oracle was
capable of tackling this interactively, but at the moment it's just me
sending them patches. So we'll see. Getting closer though, albeit very
slowly.

Jason


* Re: [PATCH v3] random: use expired per-cpu timer rather than wq for mixing fast pool
  2022-09-28 16:15                           ` Jason A. Donenfeld
@ 2022-09-29 14:18                             ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-09-29 14:18 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: netdev, linux-kernel, Sherry Yang, Paul Webb, Phillip Goerl,
	Jack Vogel, Nicky Veitch, Colm Harrington, Ramanan Govindarajan,
	Dominik Brodowski, Tejun Heo, Sultan Alsawaf, stable

On 2022-09-28 18:15:46 [+0200], Jason A. Donenfeld wrote:
> Hi Sebastian,
Hi Jason,

> On Wed, Sep 28, 2022 at 02:06:45PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2022-09-27 12:42:33 [+0200], Jason A. Donenfeld wrote:
> > …
> > > This is an ordinary pattern done all over the kernel. However, Sherry
> > > noticed a 10% performance regression in qperf TCP over a 40gbps
> > > InfiniBand card. Quoting her message:
> > > 
> > > > MT27500 Family [ConnectX-3] cards:
> > > > Infiniband device 'mlx4_0' port 1 status:
> > …
> > 
> > While looking at the mlx4 driver, it looks like they don't use any NAPI
> > handling in their interrupt handler, which _might_ mean that they
> > handle more than 1k interrupts a second. I'm still curious to get that
> > ACKed from Sherry's side.
> 
> Are you sure about that? So far as I can tell drivers/net/ethernet/
> mellanox/mlx4 has plenty of napi_schedule/napi_enable and such. Or are
> you looking at the infiniband driver instead? I don't really know how
> these interact.

I've been looking at mlx4_msi_x_interrupt() and it appears that it
iterates over a ring buffer. I guess that mlx4_cq_completion() will
invoke mlx4_en_rx_irq() which schedules NAPI.

> But yea, if we've got a driver not using NAPI at 40gbps that's obviously
> going to be a problem.

So I'm wondering if we get 1 worker a second, which kills the performance,
or if we get more than 1k interrupts in less than a second, resulting in
more wakeups within a second.

> > Jason, from random's point of view: deferring until 1k interrupts + 1sec
> > delay is not desired due to low entropy, right?
> 
> Definitely || is preferable to &&.
> 
> > 
> > > Rather than incur the scheduling latency from queue_work_on, we can
> > > instead switch to running on the next timer tick, on the same core. This
> > > also batches things a bit more -- once per jiffy -- which is okay now
> > > that mix_interrupt_randomness() can credit multiple bits at once.
> > 
> > Hmmm. Do you see higher contention on input_pool.lock? Just asking
> > because if more than one CPU invokes this timer callback at around the same
> > time, then they block on the same lock.
> 
> I've been doing various experiments, sending mini patches to Oracle and
> having them test this in their rig. So far, it looks like the cost of
> the body of the worker itself doesn't matter much, but rather the cost
> of the enqueueing function is key. Still investigating though.
> 
> It's a bit frustrating, as all I have to work with are results from the
> tests, and no perf analysis. It'd be great if an engineer at Oracle was
> capable of tackling this interactively, but at the moment it's just me
> sending them patches. So we'll see. Getting closer though, albeit very
> slowly.

Oh boy. Okay.

> Jason

Sebastian

