From: Leon Romanovsky <leon@kernel.org>
To: Gal Pressman <galpress@amazon.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>,
	Doug Ledford <dledford@redhat.com>,
	linux-rdma@vger.kernel.org,
	Alexander Matushevsky <matua@amazon.com>,
	Firas JahJah <firasj@amazon.com>,
	Yossi Leybovich <sleybo@amazon.com>
Subject: Re: [PATCH for-next 4/4] RDMA/efa: CQ notifications
Date: Sun, 5 Sep 2021 10:59:06 +0300	[thread overview]
Message-ID: <YTR4yhTyYi323lqe@unreal> (raw)
In-Reply-To: <9ffde1c4-d748-0091-0d7d-b2e2eb63aa51@amazon.com>

On Sun, Sep 05, 2021 at 10:25:17AM +0300, Gal Pressman wrote:
> On 02/09/2021 18:41, Jason Gunthorpe wrote:
> > On Thu, Sep 02, 2021 at 06:17:45PM +0300, Gal Pressman wrote:
> >> On 02/09/2021 18:10, Jason Gunthorpe wrote:
> >>> On Thu, Sep 02, 2021 at 06:09:39PM +0300, Gal Pressman wrote:
> >>>> On 02/09/2021 16:02, Jason Gunthorpe wrote:
> >>>>> On Thu, Sep 02, 2021 at 10:03:16AM +0300, Gal Pressman wrote:
> >>>>>> On 01/09/2021 18:36, Jason Gunthorpe wrote:
> >>>>>>> On Wed, Sep 01, 2021 at 05:24:43PM +0300, Gal Pressman wrote:
> >>>>>>>> On 01/09/2021 14:57, Jason Gunthorpe wrote:
> >>>>>>>>> On Wed, Sep 01, 2021 at 02:50:42PM +0300, Gal Pressman wrote:
> >>>>>>>>>> On 20/08/2021 21:27, Jason Gunthorpe wrote:
> >>>>>>>>>>> On Wed, Aug 11, 2021 at 06:11:31PM +0300, Gal Pressman wrote:
> >>>>>>>>>>>> diff --git a/drivers/infiniband/hw/efa/efa_main.c b/drivers/infiniband/hw/efa/efa_main.c
> >>>>>>>>>>>> index 417dea5f90cf..29db4dec02f0 100644
> >>>>>>>>>>>> --- a/drivers/infiniband/hw/efa/efa_main.c
> >>>>>>>>>>>> +++ b/drivers/infiniband/hw/efa/efa_main.c
> >>>>>>>>>>>> @@ -67,6 +67,46 @@ static void efa_release_bars(struct efa_dev *dev, int bars_mask)
> >>>>>>>>>>>>      pci_release_selected_regions(pdev, release_bars);
> >>>>>>>>>>>>  }
> >>>>>>>>>>>>
> >>>>>>>>>>>> +static void efa_process_comp_eqe(struct efa_dev *dev, struct efa_admin_eqe *eqe)
> >>>>>>>>>>>> +{
> >>>>>>>>>>>> +    u16 cqn = eqe->u.comp_event.cqn;
> >>>>>>>>>>>> +    struct efa_cq *cq;
> >>>>>>>>>>>> +
> >>>>>>>>>>>> +    cq = xa_load(&dev->cqs_xa, cqn);
> >>>>>>>>>>>> +    if (unlikely(!cq)) {
> >>>>>>>>>>>
> >>>>>>>>>>> This seems unlikely to be correct, what prevents cq from being
> >>>>>>>>>>> destroyed concurrently?
> >>>>>>>>>>>
> >>>>>>>>>>> A comp_handler cannot be running after cq destroy completes.
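The race being pointed at can be spelled out as an interleaving; a hypothetical timeline built from the patch hunk above, not taken from the driver as posted:

```c
/*
 * Hypothetical interleaving (names from the patch above):
 *
 *   EQ interrupt handler                    destroy_cq verb
 *   --------------------                    ---------------
 *   cq = xa_load(&dev->cqs_xa, cqn);
 *                                           xa_erase(&dev->cqs_xa, cqn);
 *                                           /* CQ memory freed / reused */
 *   cq->ibcq.comp_handler(...);             /* use-after-free */
 */
```

Nothing in the hunk orders the xa_load() against the erase-and-free on the destroy path, which is the question being raised.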
> >>>>>>>>>>
> >>>>>>>>>> Sorry for the long turnaround, was OOO.
> >>>>>>>>>>
> >>>>>>>>>> The CQ cannot be destroyed until all completion events are acked.
> >>>>>>>>>> https://github.com/linux-rdma/rdma-core/blob/7fd01f0c6799f0ecb99cae03c22cf7ff61ffbf5a/libibverbs/man/ibv_get_cq_event.3#L45
> >>>>>>>>>> https://github.com/linux-rdma/rdma-core/blob/7fd01f0c6799f0ecb99cae03c22cf7ff61ffbf5a/libibverbs/cmd_cq.c#L208
> >>>>>>>>>
> >>>>>>>>> That is something quite different, and in userspace.
> >>>>>>>>>
> >>>>>>>>> What in the kernel prevents the xa_load and the xa_erase from racing together?
> >>>>>>>>
> >>>>>>>> Good point.
> >>>>>>>> I think we need to surround efa_process_comp_eqe() with an rcu_read_lock() and
> >>>>>>>> have a synchronize_rcu() after removing it from the xarray in
> >>>>>>>> destroy_cq.
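Roughly, that proposal would look like the sketch below. This is a simplified, non-compilable outline with assumed field names (cq_idx, comp_handler wiring), not the actual driver change:

```c
/* EQ handler side: look the CQ up under the RCU read lock. */
static void efa_process_comp_eqe(struct efa_dev *dev,
				 struct efa_admin_eqe *eqe)
{
	struct efa_cq *cq;

	rcu_read_lock();
	cq = xa_load(&dev->cqs_xa, eqe->u.comp_event.cqn);
	if (cq)
		cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
	rcu_read_unlock();
}

/* destroy_cq side: unpublish first, then wait out the readers. */
	xa_erase(&dev->cqs_xa, cq->cq_idx);
	synchronize_rcu();	/* no reader can still hold this cq */
	/* ... now safe to tear down and free the CQ ... */
```

The xa_erase() guarantees no new lookup can find the CQ; synchronize_rcu() guarantees any lookup already in flight has left its read-side critical section before the memory goes away.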
> >>>>>>>
> >>>>>>> Try to avoid synchronize_rcu()
> >>>>>>
> >>>>>> I don't see how that's possible?
> >>>>>
> >>>>> Usually people use call_rcu() instead
> >>>>
> >>>> Oh nice, thanks.
> >>>>
> >>>> I think the code would be much simpler using synchronize_rcu(), and the
> >>>> destroy_cq flow is usually on the cold path anyway. I also prefer to be certain
> >>>> that the CQ is freed once the destroy verb returns and not rely on the callback
> >>>> scheduling.
> >>>
> >>> I would not be happy to see synchronize_rcu on uverbs destroy
> >>> functions, it is too easy to DOS the kernel with that.
> >>
> >> OK, but isn't the fact that the uverb can return before the CQ is actually
> >> destroyed problematic?
> > 
> > Yes, you can't allow that, something other than RCU needs to prevent
> > that
> > 
> >> Maybe it's an extreme corner case, but if I created max_cq CQs, destroyed one,
> >> and try to create another one, it is not guaranteed that the create operation
> >> would succeed - even though the destroy has finished.
> > 
> > More importantly a driver cannot call completion callbacks once
> > destroy cq has returned.
> 
> So how is having some kind of synchronization to wait for the call_rcu()
> callback to finish different than using synchronize_rcu()? We'll have to wait
> for the readers to finish before returning.

Why do you need anything special beyond nullifying the completion
callback, which ensures that no new readers are coming, and using
call_rcu() to make sure that the existing readers have finished?
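In sketch form, assuming a hypothetical rcu_head embedded in struct efa_cq and an efa_cq_free_rcu() helper (neither is in the driver as posted):

```c
/* Hypothetical deferred-free callback. */
static void efa_cq_free_rcu(struct rcu_head *head)
{
	struct efa_cq *cq = container_of(head, struct efa_cq, rcu);

	/* ... release the CQ resources ... */
}

/* destroy_cq side: unpublish, then defer the free past all readers. */
	xa_erase(&dev->cqs_xa, cq->cq_idx);
	call_rcu(&cq->rcu, efa_cq_free_rcu);
```

The open question in the thread is whether deferring the free this way is compatible with the rule that no comp_handler may still run once the destroy verb has returned.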

Thanks

