From: Jason Gunthorpe <jgg@nvidia.com>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: Leon Romanovsky <leon@kernel.org>,
	Doug Ledford <dledford@redhat.com>,
	linux-rdma <linux-rdma@vger.kernel.org>
Subject: Re: [PATCH rdma-next 06/14] RDMA/cma: Add missing locking to rdma_accept()
Date: Tue, 9 Feb 2021 11:40:14 -0400
Message-ID: <20210209154014.GO4247@nvidia.com>
In-Reply-To: <C69C843C-A2D5-4A17-ACEE-67056864DDA7@oracle.com>

On Tue, Feb 09, 2021 at 02:46:48PM +0000, Chuck Lever wrote:
> Howdy-
> 
> > On Aug 18, 2020, at 8:05 AM, Leon Romanovsky <leon@kernel.org> wrote:
> > 
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > 
> > In almost all cases rdma_accept() is called under the handler_mutex by
> > ULPs from their handler callbacks. The one exception was ucma which did
> > not get the handler_mutex.
> 
> It turns out that the RPC/RDMA server also does not invoke rdma_accept()
> from its CM event handler.
> 
> See net/sunrpc/xprtrdma/svc_rdma_transport.c:svc_rdma_accept()
> 
> When lock debugging is enabled, the lockdep assertion in rdma_accept()
> fires on every RPC/RDMA connection.
> 
> I'm not quite sure what to do about this.

Add the manual handler mutex calls like ucma did:

> > +void rdma_lock_handler(struct rdma_cm_id *id)
> > +{
> > +	struct rdma_id_private *id_priv =
> > +		container_of(id, struct rdma_id_private, id);
> > +
> > +	mutex_lock(&id_priv->handler_mutex);
> > +}
> > +EXPORT_SYMBOL(rdma_lock_handler);
> > +
> > +void rdma_unlock_handler(struct rdma_cm_id *id)
> > +{
> > +	struct rdma_id_private *id_priv =
> > +		container_of(id, struct rdma_id_private, id);
> > +
> > +	mutex_unlock(&id_priv->handler_mutex);
> > +}
> > +EXPORT_SYMBOL(rdma_unlock_handler);

But you need to audit carefully that this doesn't have messed up
concurrency. IIRC this means being careful that no events that could
be delivered before you get to accepting could have done something
they shouldn't, like free the cm_id, for instance.
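
As a rough sketch only (assuming the exported rdma_lock_handler() /
rdma_unlock_handler() quoted above; the wrapper name and the simplified
error and lifetime handling here are purely illustrative, not code from
this thread), a ULP that accepts outside its CM event handler could
wrap the call like this:

#include <rdma/rdma_cm.h>

static int example_svc_accept(struct rdma_cm_id *cm_id,
			      struct rdma_conn_param *conn_param)
{
	int ret;

	/* Serialize against CM event callbacks on this cm_id */
	rdma_lock_handler(cm_id);

	/*
	 * With the handler mutex held, no event handler can run
	 * concurrently, so the lockdep assertion in rdma_accept()
	 * is satisfied.  The caller must still make sure no earlier
	 * event (disconnect, device removal, ...) has already torn
	 * down or freed the cm_id before reaching this point.
	 */
	ret = rdma_accept(cm_id, conn_param);

	rdma_unlock_handler(cm_id);
	return ret;
}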

Jason

Thread overview: 18+ messages
2020-08-18 12:05 [PATCH rdma-next 00/14] Cleanup locking and events in ucma Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 01/14] RDMA/ucma: Fix refcount 0 incr in ucma_get_ctx() Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 02/14] RDMA/ucma: Remove unnecessary locking of file->ctx_list in close Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 03/14] RDMA/ucma: Consolidate the two destroy flows Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 04/14] RDMA/ucma: Fix error cases around ucma_alloc_ctx() Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 05/14] RDMA/ucma: Remove mc_list and rely on xarray Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 06/14] RDMA/cma: Add missing locking to rdma_accept() Leon Romanovsky
2021-02-09 14:46   ` Chuck Lever
2021-02-09 15:40     ` Jason Gunthorpe [this message]
2020-08-18 12:05 ` [PATCH rdma-next 07/14] RDMA/ucma: Do not use file->mut to lock destroying Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 08/14] RDMA/ucma: Fix the locking of ctx->file Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 09/14] RDMA/ucma: Fix locking for ctx->events_reported Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 10/14] RDMA/ucma: Add missing locking around rdma_leave_multicast() Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 11/14] RDMA/ucma: Change backlog into an atomic Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 12/14] RDMA/ucma: Narrow file->mut in ucma_event_handler() Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 13/14] RDMA/ucma: Rework how new connections are passed through event delivery Leon Romanovsky
2020-08-18 12:05 ` [PATCH rdma-next 14/14] RDMA/ucma: Remove closing and the close_wq Leon Romanovsky
2020-08-27 11:39 ` [PATCH rdma-next 00/14] Cleanup locking and events in ucma Jason Gunthorpe
