From: parav@mellanox.com (Parav Pandit)
Subject: [PATCH] nvmet: Avoid writing fabric_ops, queue pointers on every request.
Date: Mon, 13 Feb 2017 16:12:49 +0000	[thread overview]
Message-ID: <VI1PR0502MB3008119A87F7058CAA4B22C0D1590@VI1PR0502MB3008.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <700fde5d-3b84-2279-4176-c871364d4b97@grimberg.me>

Hi James,

Does this change look fine for FC?
I do not have infrastructure to test the FC side.

Parav

> -----Original Message-----
> From: Parav Pandit
> Sent: Wednesday, February 8, 2017 2:03 PM
> To: 'Sagi Grimberg' <sagi@grimberg.me>; hch@lst.de;
> james.smart@broadcom.com; linux-nvme@lists.infradead.org
> Subject: RE: [PATCH] nvmet: Avoid writing fabric_ops, queue pointers on
> every request.
> 
> Hi Sagi,
> 
> > -----Original Message-----
> > From: Sagi Grimberg [mailto:sagi@grimberg.me]
> > Sent: Wednesday, February 8, 2017 12:18 PM
> > To: Parav Pandit <parav@mellanox.com>; hch@lst.de;
> > james.smart@broadcom.com; linux-nvme@lists.infradead.org
> > Subject: Re: [PATCH] nvmet: Avoid writing fabric_ops, queue pointers
> > on every request.
> >
> >
> > >>> Additionally, this patch avoids initializing the nvme cq and sq
> > >>> pointers on every request for rdma, because nvme queue linking
> > >>> occurs at queue allocation time for both the AQ and the IOQs.
> > >>
> > >> This breaks SRQ mode, where every nvmet_rdma_cmd serves different
> > >> queues over its lifetime...
> > >
> > > I fail to understand that.
> > > nvmet_rdma_create_queue_ib() is called for as many QPs as we create,
> > > not based on the number of SRQs we create.
> >
> > Correct.
> >
> > > nvmet_rdma_queue stores cq and sq.
> >
> > Correct.
> >
> > > So there are as many cqs and sqs on the fabrics side as there are
> > > QPs for which the fabrics connect command is called.
> > > The queue is pulled out of the cq context on which we received the
> > > command.
> > > The SRQ is just a pool of RQ buffers shared among these nvme queues,
> > > right?
> >
> > Correct too, but we then assign the queue to the command, which is the
> > context of the received SQE (maybe with in-capsule data). For the SRQ
> > case we allocate the commands and pre-post them (before we have any
> > queues), so they are absolutely not bound to a given queue; they
> > can't be, actually.
> >
> > So on each new recv completion, the command context is bound to the
> > queue that it completed on, meaning it can be bound to different
> > queues over its lifetime.
> 
> Sorry, I am still not getting it.
> The nvmet_rdma_rsp structure contains a nvmet_req.
> nvmet_rdma_rsp (and so nvmet_req) are per-QP allocations.
> The cq and sq pointers are initialized inside nvmet_req.
> 
> nvmet_rdma_cmd is a per-RQ/SRQ allocation.
> In recv_done(), we bind the rsp structure to the cmd (the cmd can come
> from an RQ or an SRQ).
> So I believe this is still good.
> If the cq and sq pointers were inside nvmet_rdma_cmd, then I could
> understand that it would break.
> 
> On a side note: I tested the patch with the use_srq flag (but didn't
> publish its performance numbers, as they are awaiting your per-core SRQ
> fixes to match the regular RQ numbers :-) ).
> 
> 

Thread overview: 13+ messages
2017-02-07 22:37 [PATCH] nvmet: Avoid writing fabric_ops, queue pointers on every request Parav Pandit
2017-02-08 10:13 ` Sagi Grimberg
2017-02-08 18:06   ` Parav Pandit
2017-02-08 18:18     ` Sagi Grimberg
2017-02-08 20:02       ` Parav Pandit
2017-02-13 16:11       ` Parav Pandit
2017-02-15  8:58         ` Sagi Grimberg
2017-02-15 16:15           ` hch
2017-02-15 16:28             ` Sagi Grimberg
2017-02-15 16:31               ` Parav Pandit
2017-02-15 16:34                 ` hch
2017-02-15 16:54                   ` Parav Pandit
2017-02-13 16:12       ` Parav Pandit [this message]
