From: Leon Romanovsky <leon@kernel.org>
To: Maor Gottlieb <maorg@mellanox.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>,
	Doug Ledford <dledford@redhat.com>,
	linux-rdma@vger.kernel.org
Subject: Re: [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup
Date: Thu, 25 Jun 2020 11:26:35 +0300
Message-ID: <20200625082635.GC1446285@unreal>
In-Reply-To: <9e018ff8-9ba1-4dd2-fb5b-ce22b81b2c52@mellanox.com>

On Wed, Jun 24, 2020 at 05:48:27PM +0300, Maor Gottlieb wrote:
>
> On 6/24/2020 5:00 PM, Jason Gunthorpe wrote:
> > On Wed, Jun 24, 2020 at 01:42:49PM +0300, Maor Gottlieb wrote:
> > > On 6/23/2020 9:49 PM, Jason Gunthorpe wrote:
> > > > On Tue, Jun 23, 2020 at 09:15:06PM +0300, Leon Romanovsky wrote:
> > > > > On Tue, Jun 23, 2020 at 02:52:00PM -0300, Jason Gunthorpe wrote:
> > > > > > On Tue, Jun 23, 2020 at 02:15:31PM +0300, Leon Romanovsky wrote:
> > > > > > > From: Maor Gottlieb <maorg@mellanox.com>
> > > > > > >
> > > > > > > Replace the mutex with a read-write semaphore and use an xarray
> > > > > > > instead of a linked list for XRC target QPs. This makes XRC target
> > > > > > > lookup faster. In addition, when a QP is closed, don't insert it
> > > > > > > back into the xarray if the destroy command failed.
> > > > > > >
> > > > > > > Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
> > > > > > > Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> > > > > > > ---
> > > > > > >    drivers/infiniband/core/verbs.c | 57 ++++++++++++---------------------
> > > > > > >    include/rdma/ib_verbs.h         |  5 ++-
> > > > > > >    2 files changed, 23 insertions(+), 39 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> > > > > > > index d66a0ad62077..1ccbe43e33cd 100644
> > > > > > > --- a/drivers/infiniband/core/verbs.c
> > > > > > > +++ b/drivers/infiniband/core/verbs.c
> > > > > > > @@ -1090,13 +1090,6 @@ static void __ib_shared_qp_event_handler(struct ib_event *event, void *context)
> > > > > > >    	spin_unlock_irqrestore(&qp->device->qp_open_list_lock, flags);
> > > > > > >    }
> > > > > > >
> > > > > > > -static void __ib_insert_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
> > > > > > > -{
> > > > > > > -	mutex_lock(&xrcd->tgt_qp_mutex);
> > > > > > > -	list_add(&qp->xrcd_list, &xrcd->tgt_qp_list);
> > > > > > > -	mutex_unlock(&xrcd->tgt_qp_mutex);
> > > > > > > -}
> > > > > > > -
> > > > > > >    static struct ib_qp *__ib_open_qp(struct ib_qp *real_qp,
> > > > > > >    				  void (*event_handler)(struct ib_event *, void *),
> > > > > > >    				  void *qp_context)
> > > > > > > @@ -1139,16 +1132,15 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
> > > > > > >    	if (qp_open_attr->qp_type != IB_QPT_XRC_TGT)
> > > > > > >    		return ERR_PTR(-EINVAL);
> > > > > > >
> > > > > > > -	qp = ERR_PTR(-EINVAL);
> > > > > > > -	mutex_lock(&xrcd->tgt_qp_mutex);
> > > > > > > -	list_for_each_entry(real_qp, &xrcd->tgt_qp_list, xrcd_list) {
> > > > > > > -		if (real_qp->qp_num == qp_open_attr->qp_num) {
> > > > > > > -			qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
> > > > > > > -					  qp_open_attr->qp_context);
> > > > > > > -			break;
> > > > > > > -		}
> > > > > > > +	down_read(&xrcd->tgt_qps_rwsem);
> > > > > > > +	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
> > > > > > > +	if (!real_qp) {
> > > > > > Don't we already have a xarray indexed against qp_num in res_track?
> > > > > > Can we use it somehow?
> > > > > We don't have restrack for XRC; we will need to somehow manage the
> > > > > QP-to-XRC connection there.
> > > > It is not xrc; this is just looking up a qp and checking if it is
> > > > part of the xrcd
> > > >
> > > > Jason
> > > It's the XRC target QP, and it is not tracked.
> > Really? Something called 'real_qp' isn't stored in the restrack?
> > Doesn't that sound like a bug already?
> >
> > Jason
>
> Bug / limitation. See the comment below from core_priv.h:
>
>         /*
>          * We don't track XRC QPs for now, because they don't have PD
>          * and more importantly they are created internaly by driver,
>          * see mlx5 create_dev_resources() as an example.
>          */
>
> Leon, is the PD a real limitation? Regarding the second part (mlx5), you
> just sent patches that change it, right?

The second part is not relevant now, but the first part is still
relevant, due to the check in restrack.c.

	case RDMA_RESTRACK_QP:
		pd = container_of(res, struct ib_qp, res)->pd;
		if (!pd) {
			WARN_ONCE(true, "XRC QPs are not supported\n");
			/* Survive, despite the programmer's error */
			res->kern_name = " ";
		}
		break;


The reason for it is that "regular" QPs carry the name of their "creator"
in the PD, which doesn't exist for XRC QPs. It is possible to change this
and special-case XRC, but every place that touches "kern_name" would need
to be audited; a rough sketch of what I mean is below.
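
Something like this could work (hypothetical, untested sketch; the "xrc"
placeholder name is made up):

	case RDMA_RESTRACK_QP:
		pd = container_of(res, struct ib_qp, res)->pd;
		if (pd)
			res->kern_name = pd->res.kern_name;
		else
			/*
			 * Hypothetical: XRC QPs have no PD, so there is no
			 * creator name to inherit; fall back to a fixed
			 * placeholder instead of warning.
			 */
			res->kern_name = "xrc";
		break;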

It is on my roadmap for after the allocation work is finished and we
introduce proper reference counting for the QPs.
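
For reference, the lookup that this series optimizes boils down to roughly
the following pattern (reconstructed from the hunk quoted above only, so
the error path here is abbreviated, not authoritative):

	down_read(&xrcd->tgt_qps_rwsem);
	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
	if (!real_qp) {
		/* no XRC target QP registered under this qp_num */
		up_read(&xrcd->tgt_qps_rwsem);
		return ERR_PTR(-EINVAL);
	}
	qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
			  qp_open_attr->qp_context);
	up_read(&xrcd->tgt_qps_rwsem);

Readers take the semaphore shared, so concurrent ib_open_qp() calls no
longer serialize on a mutex, and xa_load() replaces the O(n) list walk
keyed by qp_num.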

Thanks

Thread overview: 13 messages
2020-06-23 11:15 [PATCH rdma-next v1 0/2] Convert XRC to use xarray Leon Romanovsky
2020-06-23 11:15 ` [PATCH rdma-next v1 1/2] RDMA: Clean ib_alloc_xrcd() and reuse it to allocate XRC domain Leon Romanovsky
2020-07-02 18:27   ` Jason Gunthorpe
2020-07-03  6:25     ` Leon Romanovsky
2020-07-03 12:00       ` Jason Gunthorpe
2020-06-23 11:15 ` [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup Leon Romanovsky
2020-06-23 17:52   ` Jason Gunthorpe
2020-06-23 18:15     ` Leon Romanovsky
2020-06-23 18:49       ` Jason Gunthorpe
2020-06-24 10:42         ` Maor Gottlieb
2020-06-24 14:00           ` Jason Gunthorpe
2020-06-24 14:48             ` Maor Gottlieb
2020-06-25  8:26               ` Leon Romanovsky [this message]
