* [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters
@ 2015-01-25  0:18 Trond Myklebust
  2015-01-25  0:18 ` [PATCH 2/2] SUNRPC: Allow waiting on memory allocation Trond Myklebust
  2015-01-26 23:33 ` [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters Shirley Ma
  0 siblings, 2 replies; 11+ messages in thread
From: Trond Myklebust @ 2015-01-25  0:18 UTC (permalink / raw)
  To: linux-nfs

Increase the concurrency level for rpciod threads to allow for allocations
etc that happen in the RPCSEC_GSS layer. Also note that the NFSv4 byte range
locks may now need to allocate memory from inside rpciod.

Add the WQ_HIGHPRI flag to improve latency guarantees while we're at it.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
---
 net/sunrpc/sched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index d20f2329eea3..4f65ec28d2b4 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -1069,7 +1069,8 @@ static int rpciod_start(void)
 	 * Create the rpciod thread and wait for it to start.
 	 */
 	dprintk("RPC:       creating workqueue rpciod\n");
-	wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM, 1);
+	/* Note: highpri because network receive is latency sensitive */
+	wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
 	rpciod_workqueue = wq;
 	return rpciod_workqueue != NULL;
 }
-- 
2.1.0
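For reference, the new parameters behave roughly as follows under the workqueue API of that era (an annotated sketch for context, not part of the submitted patch):

	/*
	 * WQ_MEM_RECLAIM - guarantees a rescuer thread, so queued work can still
	 *                  make forward progress under memory pressure (rpciod
	 *                  sits on the swap-over-NFS path).
	 * WQ_HIGHPRI     - work items run from the high-priority (elevated nice)
	 *                  worker pool, which helps RPC receive latency.
	 * max_active = 0 - use the default per-CPU concurrency limit
	 *                  (WQ_DFL_ACTIVE) instead of the previous hard limit of 1.
	 */
	wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);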



* [PATCH 2/2] SUNRPC: Allow waiting on memory allocation
  2015-01-25  0:18 [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters Trond Myklebust
@ 2015-01-25  0:18 ` Trond Myklebust
  2015-01-25 20:11   ` Chuck Lever
  2015-01-26 23:33 ` [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters Shirley Ma
  1 sibling, 1 reply; 11+ messages in thread
From: Trond Myklebust @ 2015-01-25  0:18 UTC (permalink / raw)
  To: linux-nfs

We should be safe now, as long as we don't do GFP_IO or higher allocations

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
---
 net/sunrpc/sched.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index 4f65ec28d2b4..b91fd9c597b4 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -844,10 +844,10 @@ static void rpc_async_schedule(struct work_struct *work)
 void *rpc_malloc(struct rpc_task *task, size_t size)
 {
 	struct rpc_buffer *buf;
-	gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN;
+	gfp_t gfp = GFP_NOIO | __GFP_NOWARN;
 
 	if (RPC_IS_SWAPPER(task))
-		gfp |= __GFP_MEMALLOC;
+		gfp = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;
 
 	size += sizeof(struct rpc_buffer);
 	if (size <= RPC_BUFFER_MAXSIZE)
-- 
2.1.0
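With the patch applied, the allocation path reads roughly as follows (an abbreviated reconstruction of the surrounding function, for context only):

	void *rpc_malloc(struct rpc_task *task, size_t size)
	{
		struct rpc_buffer *buf;
		gfp_t gfp = GFP_NOIO | __GFP_NOWARN;	/* may now sleep, but starts no new I/O */

		if (RPC_IS_SWAPPER(task))	/* swap-out path must never block in reclaim */
			gfp = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;

		size += sizeof(struct rpc_buffer);
		if (size <= RPC_BUFFER_MAXSIZE)
			buf = mempool_alloc(rpc_buffer_mempool, gfp);	/* small requests: mempool */
		else
			buf = kmalloc(size, gfp);			/* large requests: kmalloc */
		if (!buf)
			return NULL;
		buf->len = size;
		return &buf->data;	/* remaining bookkeeping elided */
	}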



* Re: [PATCH 2/2] SUNRPC: Allow waiting on memory allocation
  2015-01-25  0:18 ` [PATCH 2/2] SUNRPC: Allow waiting on memory allocation Trond Myklebust
@ 2015-01-25 20:11   ` Chuck Lever
  2015-01-25 21:29     ` Trond Myklebust
  0 siblings, 1 reply; 11+ messages in thread
From: Chuck Lever @ 2015-01-25 20:11 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: Linux NFS Mailing List


On Jan 24, 2015, at 7:18 PM, Trond Myklebust <trond.myklebust@primarydata.com> wrote:

> We should be safe now, as long as we don't do GFP_IO or higher allocations

Should the GFP flags in xprt_rdma_allocate() reflect this change
as well? The non-swap case uses GFP_NOFS currently, and the
swap case does not include GFP_MEMALLOC. These choices might be
out of date.

If so I can submit a patch on top of my existing for-3.20 series
that changes xprt_rdma_allocate() to use the same flags as
rpc_malloc().
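Roughly, that would mirror the flag selection rpc_malloc() uses after this patch; a sketch (the helper below is purely illustrative, and the real xprt_rdma_allocate() code may differ in detail):

	/* Illustrative only: choose GFP flags for the RDMA request buffer the
	 * same way rpc_malloc() now does. */
	static gfp_t rpcrdma_alloc_gfp(struct rpc_task *task)
	{
		if (RPC_IS_SWAPPER(task))
			return __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;
		return GFP_NOIO | __GFP_NOWARN;	/* instead of the current GFP_NOFS */
	}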


> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> ---
> net/sunrpc/sched.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
> index 4f65ec28d2b4..b91fd9c597b4 100644
> --- a/net/sunrpc/sched.c
> +++ b/net/sunrpc/sched.c
> @@ -844,10 +844,10 @@ static void rpc_async_schedule(struct work_struct *work)
> void *rpc_malloc(struct rpc_task *task, size_t size)
> {
> 	struct rpc_buffer *buf;
> -	gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN;
> +	gfp_t gfp = GFP_NOIO | __GFP_NOWARN;
> 
> 	if (RPC_IS_SWAPPER(task))
> -		gfp |= __GFP_MEMALLOC;
> +		gfp = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;
> 
> 	size += sizeof(struct rpc_buffer);
> 	if (size <= RPC_BUFFER_MAXSIZE)
> -- 
> 2.1.0
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com





* Re: [PATCH 2/2] SUNRPC: Allow waiting on memory allocation
  2015-01-25 20:11   ` Chuck Lever
@ 2015-01-25 21:29     ` Trond Myklebust
  0 siblings, 0 replies; 11+ messages in thread
From: Trond Myklebust @ 2015-01-25 21:29 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Linux NFS Mailing List

On Sun, Jan 25, 2015 at 3:11 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>
>
> On Jan 24, 2015, at 7:18 PM, Trond Myklebust <trond.myklebust@primarydata.com> wrote:
>
> > We should be safe now, as long as we don't do GFP_IO or higher allocations
>
> Should the GFP flags in xprt_rdma_allocate() reflect this change
> as well? The non-swap case uses GFP_NOFS currently, and the

GFP_NOFS or GFP_NOWAIT? If the former, then that would be yet further
justification for PATCH 1/2.

> swap case does not include GFP_MEMALLOC. These choices might be
> out of date.
>
> If so I can submit a patch on top of my existing for-3.20 series
> that changes xprt_rdma_allocate() to use the same flags as
> rpc_malloc().

Yes. I think GFP_NOIO is the more conservative (and correct) approach
here, rather than GFP_NOFS. In particular it means that we won't
trigger any new swap-over-nfs activity.
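For reference, the flag definitions of that era (pre-4.4 include/linux/gfp.h naming) make the ordering explicit:

	/*
	 * Approximate 3.19-era definitions; each step down removes a capability:
	 *
	 *   GFP_KERNEL = __GFP_WAIT | __GFP_IO | __GFP_FS   may recurse into filesystems
	 *   GFP_NOFS   = __GFP_WAIT | __GFP_IO              may start I/O, no FS recursion
	 *   GFP_NOIO   = __GFP_WAIT                         may sleep and reclaim, no new I/O
	 *   GFP_NOWAIT = GFP_ATOMIC & ~__GFP_HIGH           may not sleep at all
	 *
	 * So GFP_NOIO is strictly more restrictive than GFP_NOFS, which is why it
	 * cannot start new swap-over-NFS writeback from inside rpciod.
	 */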

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com


* Re: [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters
  2015-01-25  0:18 [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters Trond Myklebust
  2015-01-25  0:18 ` [PATCH 2/2] SUNRPC: Allow waiting on memory allocation Trond Myklebust
@ 2015-01-26 23:33 ` Shirley Ma
  2015-01-27  0:34   ` Trond Myklebust
  1 sibling, 1 reply; 11+ messages in thread
From: Shirley Ma @ 2015-01-26 23:33 UTC (permalink / raw)
  To: Trond Myklebust, linux-nfs

Hello Trond,

The workqueue WQ_UNBOUND flag is also needed. A customer hit a problem where an RT thread caused rpciod starvation. It is easy to reproduce by running a CPU-intensive workload with a lower nice value than the rpciod workqueue on the CPU where the network interrupt is received.

I've also tested iozone and fio with the WQ_UNBOUND|WQ_SYSFS flags enabled for NFS/RDMA and NFS/IPoIB. The results are better than with BOUND.
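For clarity, the variant described here would look roughly like this (a sketch of the flags under discussion, not a submitted patch):

	/* Sketch: unbound workers, attributes tunable via sysfs */
	wq = alloc_workqueue("rpciod",
			     WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND | WQ_SYSFS, 0);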

Thanks,
Shirley

On 01/24/2015 04:18 PM, Trond Myklebust wrote:
> Increase the concurrency level for rpciod threads to allow for allocations
> etc that happen in the RPCSEC_GSS layer. Also note that the NFSv4 byte range
> locks may now need to allocate memory from inside rpciod.
> 
> Add the WQ_HIGHPRI flag to improve latency guarantees while we're at it.
> 
> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> ---
>  net/sunrpc/sched.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
> index d20f2329eea3..4f65ec28d2b4 100644
> --- a/net/sunrpc/sched.c
> +++ b/net/sunrpc/sched.c
> @@ -1069,7 +1069,8 @@ static int rpciod_start(void)
>  	 * Create the rpciod thread and wait for it to start.
>  	 */
>  	dprintk("RPC:       creating workqueue rpciod\n");
> -	wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM, 1);
> +	/* Note: highpri because network receive is latency sensitive */
> +	wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
>  	rpciod_workqueue = wq;
>  	return rpciod_workqueue != NULL;
>  }
> 


* Re: [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters
  2015-01-26 23:33 ` [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters Shirley Ma
@ 2015-01-27  0:34   ` Trond Myklebust
  2015-01-27  2:30     ` Shirley Ma
  0 siblings, 1 reply; 11+ messages in thread
From: Trond Myklebust @ 2015-01-27  0:34 UTC (permalink / raw)
  To: Shirley Ma; +Cc: Linux NFS Mailing List

On Mon, Jan 26, 2015 at 6:33 PM, Shirley Ma <shirley.ma@oracle.com> wrote:
> Hello Trond,
>
> workqueue WQ_UNBOUND flag is also needed. Some customer hit a problem, RT thread caused rpciod starvation. It is easy to reproduce it with running a cpu intensive workload with lower nice value than rpciod workqueue on the cpu the network interrupt is received.
>
> I've also tested iozone and fio test with WQ_UNBOUND|WQ_SYSFS flag on for NFS/RDMA, NFS/IPoIB. The results are better than BOUND.

It certainly does not seem appropriate to use WQ_SYSFS on a queue that
is used for swap, and Documentation/kernel-per-CPU-kthreads.txt makes
an extra strong argument against enabling it on the grounds that it is
not easily reversible.

As for unbound queues: they will almost by definition defeat all the
packet steering and balancing that is done in the networking layer in
the name of multi-process scalability (see
Documentation/networking/scaling.txt). While RDMA systems may or may
not care about that, ordinary networked systems probably do.
Don't most RDMA drivers allow you to balance those interrupts, at
least on the high end systems?


> Thanks,
> Shirley
>
> On 01/24/2015 04:18 PM, Trond Myklebust wrote:
>> Increase the concurrency level for rpciod threads to allow for allocations
>> etc that happen in the RPCSEC_GSS layer. Also note that the NFSv4 byte range
>> locks may now need to allocate memory from inside rpciod.
>>
>> Add the WQ_HIGHPRI flag to improve latency guarantees while we're at it.
>>
>> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
>> ---
>>  net/sunrpc/sched.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
>> index d20f2329eea3..4f65ec28d2b4 100644
>> --- a/net/sunrpc/sched.c
>> +++ b/net/sunrpc/sched.c
>> @@ -1069,7 +1069,8 @@ static int rpciod_start(void)
>>        * Create the rpciod thread and wait for it to start.
>>        */
>>       dprintk("RPC:       creating workqueue rpciod\n");
>> -     wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM, 1);
>> +     /* Note: highpri because network receive is latency sensitive */
>> +     wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
>>       rpciod_workqueue = wq;
>>       return rpciod_workqueue != NULL;
>>  }
>>



-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com


* Re: [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters
  2015-01-27  0:34   ` Trond Myklebust
@ 2015-01-27  2:30     ` Shirley Ma
  2015-01-27  3:17       ` Trond Myklebust
  0 siblings, 1 reply; 11+ messages in thread
From: Shirley Ma @ 2015-01-27  2:30 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: Linux NFS Mailing List



On 01/26/2015 04:34 PM, Trond Myklebust wrote:
> On Mon, Jan 26, 2015 at 6:33 PM, Shirley Ma <shirley.ma@oracle.com> wrote:
>> Hello Trond,
>>
>> workqueue WQ_UNBOUND flag is also needed. Some customer hit a problem, RT thread caused rpciod starvation. It is easy to reproduce it with running a cpu intensive workload with lower nice value than rpciod workqueue on the cpu the network interrupt is received.
>>
>> I've also tested iozone and fio test with WQ_UNBOUND|WQ_SYSFS flag on for NFS/RDMA, NFS/IPoIB. The results are better than BOUND.
> 
> It certainly does not seem appropriate to use WQ_SYSFS on a queue that
> is used for swap, and Documentation/kernel-per-CPU-kthreads.txt makes
> an extra strong argument against enabling it on the grounds that it is
> not easily reversible.

If UNBOUND is enabled, I thought being able to customize the workqueue would help.

> As for unbound queues: they will almost by definition defeat all the
> packet steering and balancing that is done in the networking layer in
> the name of multi-process scalability (see
> Documentation/networking/scaling.txt). While RDMA systems may or may
> not care about that, ordinary networked systems probably do.
> Don't most RDMA drivers allow you to balance those interrupts, at
> least on the high end systems?

The problem was that irqbalance is not aware of which CPU is busy on the system: NIC interrupts can be directed to the busy CPU while other CPUs are much more lightly loaded, so packet steering and balancing in the networking layer doesn't have any benefit. The network workload can cause starvation in this situation whether the transport is RDMA or Ethernet. The workaround is to unmask this busy CPU in irqbalance, or to set up IRQ SMP affinity manually so that no interrupts land on this CPU.

An UNBOUND workqueue will choose the same CPU to run the work if that CPU is not busy, and the work is scheduled to another CPU on the same NUMA node when that CPU is busy. So it doesn't defeat packet steering and balancing.

> 
>> Thanks,
>> Shirley
>>
>> On 01/24/2015 04:18 PM, Trond Myklebust wrote:
>>> Increase the concurrency level for rpciod threads to allow for allocations
>>> etc that happen in the RPCSEC_GSS layer. Also note that the NFSv4 byte range
>>> locks may now need to allocate memory from inside rpciod.
>>>
>>> Add the WQ_HIGHPRI flag to improve latency guarantees while we're at it.
>>>
>>> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
>>> ---
>>>  net/sunrpc/sched.c | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
>>> index d20f2329eea3..4f65ec28d2b4 100644
>>> --- a/net/sunrpc/sched.c
>>> +++ b/net/sunrpc/sched.c
>>> @@ -1069,7 +1069,8 @@ static int rpciod_start(void)
>>>        * Create the rpciod thread and wait for it to start.
>>>        */
>>>       dprintk("RPC:       creating workqueue rpciod\n");
>>> -     wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM, 1);
>>> +     /* Note: highpri because network receive is latency sensitive */
>>> +     wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
>>>       rpciod_workqueue = wq;
>>>       return rpciod_workqueue != NULL;
>>>  }
>>>
> 
> 
> 


* Re: [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters
  2015-01-27  2:30     ` Shirley Ma
@ 2015-01-27  3:17       ` Trond Myklebust
  2015-01-27  6:20         ` Shirley Ma
  0 siblings, 1 reply; 11+ messages in thread
From: Trond Myklebust @ 2015-01-27  3:17 UTC (permalink / raw)
  To: Shirley Ma; +Cc: Linux NFS Mailing List

On Mon, Jan 26, 2015 at 9:30 PM, Shirley Ma <shirley.ma@oracle.com> wrote:
>
>
>
> On 01/26/2015 04:34 PM, Trond Myklebust wrote:
> > On Mon, Jan 26, 2015 at 6:33 PM, Shirley Ma <shirley.ma@oracle.com> wrote:
> >> Hello Trond,
> >>
> >> workqueue WQ_UNBOUND flag is also needed. Some customer hit a problem, RT thread caused rpciod starvation. It is easy to reproduce it with running a cpu intensive workload with lower nice value than rpciod workqueue on the cpu the network interrupt is received.
> >>
> >> I've also tested iozone and fio test with WQ_UNBOUND|WQ_SYSFS flag on for NFS/RDMA, NFS/IPoIB. The results are better than BOUND.
> >
> > It certainly does not seem appropriate to use WQ_SYSFS on a queue that
> > is used for swap, and Documentation/kernel-per-CPU-kthreads.txt makes
> > an extra strong argument against enabling it on the grounds that it is
> > not easily reversible.
>
> If enabling UNBOUND, I thought customizing workqueue would help.
>
> > As for unbound queues: they will almost by definition defeat all the
> > packet steering and balancing that is done in the networking layer in
> > the name of multi-process scalability (see
> > Documentation/networking/scaling.txt). While RDMA systems may or may
> > not care about that, ordinary networked systems probably do.
> > Don't most RDMA drivers allow you to balance those interrupts, at
> > least on the high end systems?
>
> The problem was IRQ balance is not aware of which cpu is busy on the system, networking NIC interrupts can be directed to the busy CPU while other cpus are much lightly loaded. So packet steering and balancing in the networking layer doesn't have any benefit.

Sure it does. Your argument revolves around a corner case.

> The network workload can cause starvation in this situation it doesn't matter it's RDMA or Ethernet. The workaround solution is to unmask this busy cpu in irq balance, or manually set up irq smp affinity to avoid any interrupts on this cpu.

Then schedule it to system_unbound_wq instead. Why do you need to change rpciod?
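One reading of that suggestion, sketched at the rpc_make_runnable() call site (illustrative only):

	/* Queue the async RPC work to the kernel's shared unbound queue
	 * instead of the bound rpciod workqueue. */
	queue_work(system_unbound_wq, &task->u.tk_work);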

> UNBOUND workqueue will choose same CPU to run the work if this CPU is not busy, it is scheduled to another CPU on the same NUMA node when this CPU is busy. So it doesn't defeat packet steering and balancing.

Yes it does. The whole point of packet steering is to maximise
locality in order to avoid contention for resources, locks etc for
networking data that is being consumed by just one (or a few)
processes. By spraying jobs across multiple threads, you are
reintroducing that contention. Furthermore, you are randomising the
processing order for _all_ rpciod tasks.

Please read Documentation/workqueue.txt, where it states clearly that:

        Unbound wq sacrifices locality but is useful for
        the following cases.

        * Wide fluctuation in the concurrency level requirement is
          expected and using bound wq may end up creating large number
          of mostly unused workers across different CPUs as the issuer
          hops through different CPUs.

        * Long running CPU intensive workloads which can be better
          managed by the system scheduler.

neither of which is the common case for rpciod.

> >> Thanks,
> >> Shirley
> >>
> >> On 01/24/2015 04:18 PM, Trond Myklebust wrote:
> >>> Increase the concurrency level for rpciod threads to allow for allocations
> >>> etc that happen in the RPCSEC_GSS layer. Also note that the NFSv4 byte range
> >>> locks may now need to allocate memory from inside rpciod.
> >>>
> >>> Add the WQ_HIGHPRI flag to improve latency guarantees while we're at it.
> >>>
> >>> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> >>> ---
> >>>  net/sunrpc/sched.c | 3 ++-
> >>>  1 file changed, 2 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
> >>> index d20f2329eea3..4f65ec28d2b4 100644
> >>> --- a/net/sunrpc/sched.c
> >>> +++ b/net/sunrpc/sched.c
> >>> @@ -1069,7 +1069,8 @@ static int rpciod_start(void)
> >>>        * Create the rpciod thread and wait for it to start.
> >>>        */
> >>>       dprintk("RPC:       creating workqueue rpciod\n");
> >>> -     wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM, 1);
> >>> +     /* Note: highpri because network receive is latency sensitive */
> >>> +     wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
> >>>       rpciod_workqueue = wq;
> >>>       return rpciod_workqueue != NULL;
> >>>  }
> >>>
> >
> >
> >




-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com


* Re: [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters
  2015-01-27  3:17       ` Trond Myklebust
@ 2015-01-27  6:20         ` Shirley Ma
  2015-01-27  6:34           ` Trond Myklebust
  0 siblings, 1 reply; 11+ messages in thread
From: Shirley Ma @ 2015-01-27  6:20 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: Linux NFS Mailing List


On 01/26/2015 07:17 PM, Trond Myklebust wrote:
> On Mon, Jan 26, 2015 at 9:30 PM, Shirley Ma <shirley.ma@oracle.com> wrote:
>>
>>
>>
>> On 01/26/2015 04:34 PM, Trond Myklebust wrote:
>>> On Mon, Jan 26, 2015 at 6:33 PM, Shirley Ma <shirley.ma@oracle.com> wrote:
>>>> Hello Trond,
>>>>
>>>> workqueue WQ_UNBOUND flag is also needed. Some customer hit a problem, RT thread caused rpciod starvation. It is easy to reproduce it with running a cpu intensive workload with lower nice value than rpciod workqueue on the cpu the network interrupt is received.
>>>>
>>>> I've also tested iozone and fio test with WQ_UNBOUND|WQ_SYSFS flag on for NFS/RDMA, NFS/IPoIB. The results are better than BOUND.
>>>
>>> It certainly does not seem appropriate to use WQ_SYSFS on a queue that
>>> is used for swap, and Documentation/kernel-per-CPU-kthreads.txt makes
>>> an extra strong argument against enabling it on the grounds that it is
>>> not easily reversible.
>>
>> If enabling UNBOUND, I thought customizing workqueue would help.
>>
>>> As for unbound queues: they will almost by definition defeat all the
>>> packet steering and balancing that is done in the networking layer in
>>> the name of multi-process scalability (see
>>> Documentation/networking/scaling.txt). While RDMA systems may or may
>>> not care about that, ordinary networked systems probably do.
>>> Don't most RDMA drivers allow you to balance those interrupts, at
>>> least on the high end systems?
>>
>> The problem was IRQ balance is not aware of which cpu is busy on the system, networking NIC interrupts can be directed to the busy CPU while other cpus are much lightly loaded. So packet steering and balancing in the networking layer doesn't have any benefit.
> 
> Sure it does. You are argument revolves around a corner case.
> 
>> The network workload can cause starvation in this situation it doesn't matter it's RDMA or Ethernet. The workaround solution is to unmask this busy cpu in irq balance, or manually set up irq smp affinity to avoid any interrupts on this cpu.
> 
> Then schedule it to system_unbound_wq instead. Why do you need to change rpciod?
> 
>> UNBOUND workqueue will choose same CPU to run the work if this CPU is not busy, it is scheduled to another CPU on the same NUMA node when this CPU is busy. So it doesn't defeat packet steering and balancing.
> 
> Yes it does. The whole point of packet steering is to maximise
> locality in order to avoid contention for resources, locks etc for
> networking data that is being consumed by just one (or a few)
> processes. By spraying jobs across multiple threads, you are
> reintroducing that contention. Furthermore, you are randomising the
> processing order for _all_rpciod tasks.
> Please read Documentation/workqueue.txt where it state clearly that:
> 
>         Unbound wq sacrifices locality but is useful for
>         the following cases.
> 
>         * Wide fluctuation in the concurrency level requirement is
>           expected and using bound wq may end up creating large number
>           of mostly unused workers across different CPUs as the issuer
>           hops through different CPUs.
> 
>         * Long running CPU intensive workloads which can be better
>           managed by the system scheduler.
> 
> neither of which conditions are the common case for rpciod.

My point was that locality is for performance; if locality doesn't gain much performance, then unbound is a better choice. From the data I've collected for the rpciod workqueue, bound vs. unbound: when the local CPU is busy, bound's performance is much worse than unbound (which gets scheduled to another, remote CPU); when the local CPU is not busy, bound and unbound performance (both latency and bandwidth) look similar. So unbound seems a better choice for rpciod.

>>>> Thanks,
>>>> Shirley
>>>>
>>>> On 01/24/2015 04:18 PM, Trond Myklebust wrote:
>>>>> Increase the concurrency level for rpciod threads to allow for allocations
>>>>> etc that happen in the RPCSEC_GSS layer. Also note that the NFSv4 byte range
>>>>> locks may now need to allocate memory from inside rpciod.
>>>>>
>>>>> Add the WQ_HIGHPRI flag to improve latency guarantees while we're at it.
>>>>>
>>>>> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
>>>>> ---
>>>>>  net/sunrpc/sched.c | 3 ++-
>>>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
>>>>> index d20f2329eea3..4f65ec28d2b4 100644
>>>>> --- a/net/sunrpc/sched.c
>>>>> +++ b/net/sunrpc/sched.c
>>>>> @@ -1069,7 +1069,8 @@ static int rpciod_start(void)
>>>>>        * Create the rpciod thread and wait for it to start.
>>>>>        */
>>>>>       dprintk("RPC:       creating workqueue rpciod\n");
>>>>> -     wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM, 1);
>>>>> +     /* Note: highpri because network receive is latency sensitive */
>>>>> +     wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
>>>>>       rpciod_workqueue = wq;
>>>>>       return rpciod_workqueue != NULL;
>>>>>  }
>>>>>
>>>
>>>
>>>
> 
> 
> 
> 


* Re: [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters
  2015-01-27  6:20         ` Shirley Ma
@ 2015-01-27  6:34           ` Trond Myklebust
  2015-01-27 15:09             ` Shirley Ma
  0 siblings, 1 reply; 11+ messages in thread
From: Trond Myklebust @ 2015-01-27  6:34 UTC (permalink / raw)
  To: Shirley Ma; +Cc: Linux NFS Mailing List

On Tue, Jan 27, 2015 at 1:20 AM, Shirley Ma <shirley.ma@oracle.com> wrote:
>
> On 01/26/2015 07:17 PM, Trond Myklebust wrote:
>> On Mon, Jan 26, 2015 at 9:30 PM, Shirley Ma <shirley.ma@oracle.com> wrote:
>>>
>>>
>>>
>>> On 01/26/2015 04:34 PM, Trond Myklebust wrote:
>>>> On Mon, Jan 26, 2015 at 6:33 PM, Shirley Ma <shirley.ma@oracle.com> wrote:
>>>>> Hello Trond,
>>>>>
>>>>> workqueue WQ_UNBOUND flag is also needed. Some customer hit a problem, RT thread caused rpciod starvation. It is easy to reproduce it with running a cpu intensive workload with lower nice value than rpciod workqueue on the cpu the network interrupt is received.
>>>>>
>>>>> I've also tested iozone and fio test with WQ_UNBOUND|WQ_SYSFS flag on for NFS/RDMA, NFS/IPoIB. The results are better than BOUND.
>>>>
>>>> It certainly does not seem appropriate to use WQ_SYSFS on a queue that
>>>> is used for swap, and Documentation/kernel-per-CPU-kthreads.txt makes
>>>> an extra strong argument against enabling it on the grounds that it is
>>>> not easily reversible.
>>>
>>> If enabling UNBOUND, I thought customizing workqueue would help.
>>>
>>>> As for unbound queues: they will almost by definition defeat all the
>>>> packet steering and balancing that is done in the networking layer in
>>>> the name of multi-process scalability (see
>>>> Documentation/networking/scaling.txt). While RDMA systems may or may
>>>> not care about that, ordinary networked systems probably do.
>>>> Don't most RDMA drivers allow you to balance those interrupts, at
>>>> least on the high end systems?
>>>
>>> The problem was IRQ balance is not aware of which cpu is busy on the system, networking NIC interrupts can be directed to the busy CPU while other cpus are much lightly loaded. So packet steering and balancing in the networking layer doesn't have any benefit.
>>
>> Sure it does. You are argument revolves around a corner case.
>>
>>> The network workload can cause starvation in this situation it doesn't matter it's RDMA or Ethernet. The workaround solution is to unmask this busy cpu in irq balance, or manually set up irq smp affinity to avoid any interrupts on this cpu.
>>
>> Then schedule it to system_unbound_wq instead. Why do you need to change rpciod?
>>
>>> UNBOUND workqueue will choose same CPU to run the work if this CPU is not busy, it is scheduled to another CPU on the same NUMA node when this CPU is busy. So it doesn't defeat packet steering and balancing.
>>
>> Yes it does. The whole point of packet steering is to maximise
>> locality in order to avoid contention for resources, locks etc for
>> networking data that is being consumed by just one (or a few)
>> processes. By spraying jobs across multiple threads, you are
>> reintroducing that contention. Furthermore, you are randomising the
>> processing order for _all_rpciod tasks.
>> Please read Documentation/workqueue.txt where it state clearly that:
>>
>>         Unbound wq sacrifices locality but is useful for
>>         the following cases.
>>
>>         * Wide fluctuation in the concurrency level requirement is
>>           expected and using bound wq may end up creating large number
>>           of mostly unused workers across different CPUs as the issuer
>>           hops through different CPUs.
>>
>>         * Long running CPU intensive workloads which can be better
>>           managed by the system scheduler.
>>
>> neither of which conditions are the common case for rpciod.
>
> My point was the locality is for performance, if locality doesn't gain much performance, then unbound is a better choice. From the data I've collected for rpciod workqueue bound vs unbound: when the local CPU is busy, bound's performance is much worse than unbound (which is scheduled to other remote CPU), when local CPU is not busy, bound and unbound performance (both latency and BW) seems similar. So unbound seems a better choice for rpciod.
>

You have supplied 1 data point on a platform (RDMA) that has no users.
I see no reason to make a change.


-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com


* Re: [PATCH 1/2] SUNRPC: Adjust rpciod workqueue parameters
  2015-01-27  6:34           ` Trond Myklebust
@ 2015-01-27 15:09             ` Shirley Ma
  0 siblings, 0 replies; 11+ messages in thread
From: Shirley Ma @ 2015-01-27 15:09 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: Linux NFS Mailing List


On 01/26/2015 10:34 PM, Trond Myklebust wrote:
> On Tue, Jan 27, 2015 at 1:20 AM, Shirley Ma <shirley.ma@oracle.com> wrote:
>>
>> On 01/26/2015 07:17 PM, Trond Myklebust wrote:
>>> On Mon, Jan 26, 2015 at 9:30 PM, Shirley Ma <shirley.ma@oracle.com> wrote:
>>>>
>>>>
>>>>
>>>> On 01/26/2015 04:34 PM, Trond Myklebust wrote:
>>>>> On Mon, Jan 26, 2015 at 6:33 PM, Shirley Ma <shirley.ma@oracle.com> wrote:
>>>>>> Hello Trond,
>>>>>>
>>>>>> workqueue WQ_UNBOUND flag is also needed. Some customer hit a problem, RT thread caused rpciod starvation. It is easy to reproduce it with running a cpu intensive workload with lower nice value than rpciod workqueue on the cpu the network interrupt is received.
>>>>>>
>>>>>> I've also tested iozone and fio test with WQ_UNBOUND|WQ_SYSFS flag on for NFS/RDMA, NFS/IPoIB. The results are better than BOUND.
>>>>>
>>>>> It certainly does not seem appropriate to use WQ_SYSFS on a queue that
>>>>> is used for swap, and Documentation/kernel-per-CPU-kthreads.txt makes
>>>>> an extra strong argument against enabling it on the grounds that it is
>>>>> not easily reversible.
>>>>
>>>> If enabling UNBOUND, I thought customizing workqueue would help.
>>>>
>>>>> As for unbound queues: they will almost by definition defeat all the
>>>>> packet steering and balancing that is done in the networking layer in
>>>>> the name of multi-process scalability (see
>>>>> Documentation/networking/scaling.txt). While RDMA systems may or may
>>>>> not care about that, ordinary networked systems probably do.
>>>>> Don't most RDMA drivers allow you to balance those interrupts, at
>>>>> least on the high end systems?
>>>>
>>>> The problem was IRQ balance is not aware of which cpu is busy on the system, networking NIC interrupts can be directed to the busy CPU while other cpus are much lightly loaded. So packet steering and balancing in the networking layer doesn't have any benefit.
>>>
>>> Sure it does. You are argument revolves around a corner case.
>>>
>>>> The network workload can cause starvation in this situation it doesn't matter it's RDMA or Ethernet. The workaround solution is to unmask this busy cpu in irq balance, or manually set up irq smp affinity to avoid any interrupts on this cpu.
>>>
>>> Then schedule it to system_unbound_wq instead. Why do you need to change rpciod?
>>>
>>>> UNBOUND workqueue will choose same CPU to run the work if this CPU is not busy, it is scheduled to another CPU on the same NUMA node when this CPU is busy. So it doesn't defeat packet steering and balancing.
>>>
>>> Yes it does. The whole point of packet steering is to maximise
>>> locality in order to avoid contention for resources, locks etc for
>>> networking data that is being consumed by just one (or a few)
>>> processes. By spraying jobs across multiple threads, you are
>>> reintroducing that contention. Furthermore, you are randomising the
>>> processing order for _all_rpciod tasks.
>>> Please read Documentation/workqueue.txt where it state clearly that:
>>>
>>>         Unbound wq sacrifices locality but is useful for
>>>         the following cases.
>>>
>>>         * Wide fluctuation in the concurrency level requirement is
>>>           expected and using bound wq may end up creating large number
>>>           of mostly unused workers across different CPUs as the issuer
>>>           hops through different CPUs.
>>>
>>>         * Long running CPU intensive workloads which can be better
>>>           managed by the system scheduler.
>>>
>>> neither of which conditions are the common case for rpciod.
>>
>> My point was the locality is for performance, if locality doesn't gain much performance, then unbound is a better choice. From the data I've collected for rpciod workqueue bound vs unbound: when the local CPU is busy, bound's performance is much worse than unbound (which is scheduled to other remote CPU), when local CPU is not busy, bound and unbound performance (both latency and BW) seems similar. So unbound seems a better choice for rpciod.
>>
> 
> You have supplied 1 data point on a platform (RDMA) that has no users.
> I see no reason to make a change.

I've also tested TCP (IPoIB), and I am going to test gigabit Ethernet once I get the setup. Based upon the data I've collected about the rpciod workqueue, I am expecting similar results: unbound is better than bound. There is no rush to make a decision now; the performance outcome of an unbound vs. bound workqueue depends on several factors: latency, CPU workload, scheduling, and so on. My observation so far is that the guideline above is not sufficient for deciding between the unbound and bound flags; latency, CPU load, and other factors should be taken into account as well.

Thanks
Shirley
 
 

