From: Jason Gunthorpe <jgg@nvidia.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: Doug Ledford <dledford@redhat.com>,
	Leon Romanovsky <leonro@nvidia.com>,
	Eli Cohen <eli@dev.mellanox.co.il>,
	<linux-kernel@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
	Roland Dreier <rolandd@cisco.com>
Subject: Re: [PATCH rdma-next 0/8] Cleanup and fix the CMA state machine
Date: Thu, 17 Sep 2020 09:33:16 -0300	[thread overview]
Message-ID: <20200917123316.GA113655@nvidia.com> (raw)
In-Reply-To: <20200902081122.745412-1-leon@kernel.org>

On Wed, Sep 02, 2020 at 11:11:14AM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
> 
> From Jason:
> 
> The RDMA CMA continues to attract syzkaller bugs due to the somewhat
> loose operation of its FSM. Audit and scrub the whole thing to follow
> modern expectations.
> 
> Overall the design elements are broadly:
> 
> - The ULP entry points MUST NOT run in parallel with each other. The ULP
>   is solely responsible for preventing this.
> 
> - If the ULP returns !0 from its event callback it MUST guarantee that no
>   other ULP threads are touching the cm_id or calling into any RDMA CM
>   entry point.
> 
> - ULP entry points can sometimes run concurrently with handler callbacks,
>   although it is tricky because there are many entry points that exist
>   in the flow before the handler is registered.
> 
> - Some ULP entry points are called from the ULP event handler callback,
>   under the handler_mutex. (however ucma never does this)
> 
> - The state field uses a weird double locking scheme; in most cases one
>   should hold the handler_mutex. (It is somewhat unclear what exactly
>   the spinlock is for)
> 
> - Reading the state without holding the spinlock should use READ_ONCE,
>   even if the handler_mutex is held.
> 
> - There are certain states which are 'stable' under the handler_mutex;
>   exit from such a state requires also holding the handler_mutex. This
>   explains why testing the state under only the handler_mutex makes sense.
> 
> Thanks
> 
> Jason Gunthorpe (8):
>   RDMA/cma: Fix locking for the RDMA_CM_CONNECT state
>   RDMA/cma: Make the locking for automatic state transition more clear
>   RDMA/cma: Fix locking for the RDMA_CM_LISTEN state
>   RDMA/cma: Remove cma_comp()
>   RDMA/cma: Combine cma_ndev_work with cma_work
>   RDMA/cma: Remove dead code for kernel rdmacm multicast
>   RDMA/cma: Consolidate the destruction of a cma_multicast in one place
>   RDMA/cma: Fix use after free race in roce multicast join

Applied to for-next

Jason

