linux-nfs.vger.kernel.org archive mirror
* [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec
@ 2019-01-24  7:02 Nicolas Morey-Chaisemartin
  2019-01-24 16:04 ` Tom Talpey
  2019-01-25 17:32 ` Chuck Lever
  0 siblings, 2 replies; 4+ messages in thread
From: Nicolas Morey-Chaisemartin @ 2019-01-24  7:02 UTC (permalink / raw)
  To: linux-rdma; +Cc: linux-nfs, chuck.lever

Make sure the device has at least 2 completion vectors before allocating the Send CQ to compvec #1

Fixes: a4699f5647f3 ("xprtrdma: Put Send CQ in IB_POLL_WORKQUEUE mode")
Signed-off-by: Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
---
Fixes since v1:
 - Use num_comp_vectors instead of online_cpus

 net/sunrpc/xprtrdma/verbs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index b725911c0f3f..db913bcef984 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -546,7 +546,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
 
 	sendcq = ib_alloc_cq(ia->ri_device, NULL,
 			     ep->rep_attr.cap.max_send_wr + 1,
-			     1, IB_POLL_WORKQUEUE);
+			     ia->ri_device->num_comp_vectors > 1 ? 1 : 0, IB_POLL_WORKQUEUE);
 	if (IS_ERR(sendcq)) {
 		rc = PTR_ERR(sendcq);
 		dprintk("RPC:       %s: failed to create send CQ: %i\n",
-- 
2.18.0
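
The one-liner above boils down to the following completion-vector
selection; this is only an illustrative sketch, and the helper name
below is hypothetical, not part of the patch:

    /* Place the Send CQ on completion vector 1 when the device
     * exposes more than one vector; otherwise fall back to
     * vector 0, which every device provides.
     */
    static int rpcrdma_sendcq_compvec(struct ib_device *device)
    {
    	return device->num_comp_vectors > 1 ? 1 : 0;
    }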



* Re: [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec
  2019-01-24  7:02 [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec Nicolas Morey-Chaisemartin
@ 2019-01-24 16:04 ` Tom Talpey
  2019-01-24 19:24   ` Chuck Lever
  2019-01-25 17:32 ` Chuck Lever
  1 sibling, 1 reply; 4+ messages in thread
From: Tom Talpey @ 2019-01-24 16:04 UTC (permalink / raw)
  To: Nicolas Morey-Chaisemartin, linux-rdma; +Cc: linux-nfs, chuck.lever

On 1/23/2019 11:02 PM, Nicolas Morey-Chaisemartin wrote:
> Make sure the device has at least 2 completion vectors before allocating the Send CQ to compvec #1

I agree this fixes the bug, but wouldn't it be better to attempt
to steer the affinity to chosen cores, rather than unconditionally
using static values? At least make some determination that the
device, and the memory that was just allocated, are pointed at a
reasonably optimal NUMA node for handling its completions.

Does the verbs API not support this? I thought that was discussed
a while back.

Tom.

> Fixes: a4699f5647f3 ("xprtrdma: Put Send CQ in IB_POLL_WORKQUEUE mode")
> Signed-off-by: Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
> ---
> Fixes since v1:
>   - Use num_comp_vectors instead of online_cpus
> 
>   net/sunrpc/xprtrdma/verbs.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> index b725911c0f3f..db913bcef984 100644
> --- a/net/sunrpc/xprtrdma/verbs.c
> +++ b/net/sunrpc/xprtrdma/verbs.c
> @@ -546,7 +546,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
>   
>   	sendcq = ib_alloc_cq(ia->ri_device, NULL,
>   			     ep->rep_attr.cap.max_send_wr + 1,
> -			     1, IB_POLL_WORKQUEUE);
> +			     ia->ri_device->num_comp_vectors > 1 ? 1 : 0, IB_POLL_WORKQUEUE);
>   	if (IS_ERR(sendcq)) {
>   		rc = PTR_ERR(sendcq);
>   		dprintk("RPC:       %s: failed to create send CQ: %i\n",
> 


* Re: [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec
  2019-01-24 16:04 ` Tom Talpey
@ 2019-01-24 19:24   ` Chuck Lever
  0 siblings, 0 replies; 4+ messages in thread
From: Chuck Lever @ 2019-01-24 19:24 UTC (permalink / raw)
  To: Tom Talpey; +Cc: Nicolas Morey-Chaisemartin, linux-rdma, Linux NFS Mailing List



> On Jan 24, 2019, at 8:04 AM, Tom Talpey <tom@talpey.com> wrote:
> 
> On 1/23/2019 11:02 PM, Nicolas Morey-Chaisemartin wrote:
>> Make sure the device has at least 2 completion vectors before allocating the Send CQ to compvec #1
> 
> I agree this fixes the bug, but wouldn't it be better to attempt
> to steer the affinity to chosen cores, rather than unconditionally
> using static values?

The goal for the moment is to create a patch that can be
backported to stable kernels with minimal fuss.


> At least make some determination that the
> device, and the memory that was just allocated, are pointed at a
> reasonably optimal NUMA node for handling its completions.
> 
> Does the verbs API not support this? I thought that was discussed
> a while back.

It is true that spreading CQs across interrupt vectors is
desirable, but for various reasons, it has been difficult to
do it in a way that doesn't have bad behavior in some common
cases.

Sagi has constructed some infrastructure that works well for
ULPs that can create many transport instances, but we don't
yet have that ability for NFS.

I'm open to suggestions for a subsequent patch after Nicolas'
fix is merged.
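
As a rough sketch of the kind of steering Tom describes, and assuming
ib_get_vector_affinity() is usable on the target kernel, a ULP could
bias its CQ toward a given NUMA node along these lines (the helper
name and its placement are hypothetical):

    #include <linux/topology.h>
    #include <rdma/ib_verbs.h>

    /* Return the first completion vector whose IRQ affinity
     * intersects the CPUs of the given NUMA node; fall back to
     * vector 0 when no affinity information is available.
     */
    static int rpcrdma_pick_compvec(struct ib_device *device, int node)
    {
    	const struct cpumask *mask;
    	int vec;

    	for (vec = 0; vec < device->num_comp_vectors; vec++) {
    		mask = ib_get_vector_affinity(device, vec);
    		if (mask && cpumask_intersects(mask,
    					       cpumask_of_node(node)))
    			return vec;
    	}
    	return 0;
    }

This only biases toward NUMA locality; spreading multiple CQs across
vectors without the bad behavior mentioned above is the separate,
harder problem.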


> Tom.
> 
>> Fixes: a4699f5647f3 ("xprtrdma: Put Send CQ in IB_POLL_WORKQUEUE mode")
>> Signed-off-by: Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
>> ---
>> Fixes since v1:
>>  - Use num_comp_vectors instead of online_cpus
>>  net/sunrpc/xprtrdma/verbs.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
>> index b725911c0f3f..db913bcef984 100644
>> --- a/net/sunrpc/xprtrdma/verbs.c
>> +++ b/net/sunrpc/xprtrdma/verbs.c
>> @@ -546,7 +546,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
>>    	sendcq = ib_alloc_cq(ia->ri_device, NULL,
>>  			     ep->rep_attr.cap.max_send_wr + 1,
>> -			     1, IB_POLL_WORKQUEUE);
>> +			     ia->ri_device->num_comp_vectors > 1 ? 1 : 0, IB_POLL_WORKQUEUE);
>>  	if (IS_ERR(sendcq)) {
>>  		rc = PTR_ERR(sendcq);
>>  		dprintk("RPC:       %s: failed to create send CQ: %i\n",

--
Chuck Lever





* Re: [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec
  2019-01-24  7:02 [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec Nicolas Morey-Chaisemartin
  2019-01-24 16:04 ` Tom Talpey
@ 2019-01-25 17:32 ` Chuck Lever
  1 sibling, 0 replies; 4+ messages in thread
From: Chuck Lever @ 2019-01-25 17:32 UTC (permalink / raw)
  To: Nicolas Morey-Chaisemartin; +Cc: linux-rdma, Linux NFS Mailing List



> On Jan 23, 2019, at 11:02 PM, Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com> wrote:
> 
> Make sure the device has at least 2 completion vectors before allocating the Send CQ to compvec #1
> 
> Fixes: a4699f5647f3 ("xprtrdma: Put Send CQ in IB_POLL_WORKQUEUE mode")
> Signed-off-by: Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
> ---
> Fixes since v1:
> - Use num_comp_vectors instead of online_cpus
> 
> net/sunrpc/xprtrdma/verbs.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> index b725911c0f3f..db913bcef984 100644
> --- a/net/sunrpc/xprtrdma/verbs.c
> +++ b/net/sunrpc/xprtrdma/verbs.c
> @@ -546,7 +546,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
> 
> 	sendcq = ib_alloc_cq(ia->ri_device, NULL,
> 			     ep->rep_attr.cap.max_send_wr + 1,
> -			     1, IB_POLL_WORKQUEUE);
> +			     ia->ri_device->num_comp_vectors > 1 ? 1 : 0, IB_POLL_WORKQUEUE);

Nit: You should split this line. Otherwise:

Reviewed-by: Chuck Lever <chuck.lever@oracle.com>

And I'll look into smarter compvec allocation again.
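
On the nit above, one possible way to split the over-long line,
keeping the patch otherwise unchanged (indentation approximate):

    	sendcq = ib_alloc_cq(ia->ri_device, NULL,
    			     ep->rep_attr.cap.max_send_wr + 1,
    			     ia->ri_device->num_comp_vectors > 1 ? 1 : 0,
    			     IB_POLL_WORKQUEUE);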


> 	if (IS_ERR(sendcq)) {
> 		rc = PTR_ERR(sendcq);
> 		dprintk("RPC:       %s: failed to create send CQ: %i\n",
> -- 
> 2.18.0
> 

--
Chuck Lever





end of thread

Thread overview: 4+ messages
2019-01-24  7:02 [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec Nicolas Morey-Chaisemartin
2019-01-24 16:04 ` Tom Talpey
2019-01-24 19:24   ` Chuck Lever
2019-01-25 17:32 ` Chuck Lever
