From: parav@mellanox.com (Parav Pandit)
Subject: [PATCH] nvmet: Avoid writing fabric_ops, queue pointers on every request.
Date: Wed, 8 Feb 2017 18:06:52 +0000	[thread overview]
Message-ID: <VI1PR0502MB3008C2598C5492B2EB4D1E55D1420@VI1PR0502MB3008.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <5bcb5982-36a6-67f5-8416-45e321235fa9@grimberg.me>

Hi Sagi,

> -----Original Message-----
> From: Sagi Grimberg [mailto:sagi@grimberg.me]
> Sent: Wednesday, February 8, 2017 4:13 AM
> To: Parav Pandit <parav@mellanox.com>; hch@lst.de;
> james.smart@broadcom.com; linux-nvme@lists.infradead.org
> Subject: Re: [PATCH] nvmet: Avoid writing fabric_ops, queue pointers on
> every request.
> 
> 
> 
> On 08/02/17 00:37, Parav Pandit wrote:
> > Fabric operations are constants of a registered transport. They don't
> > change with every target request that gets processed by the nvmet-core.
> > Therefore this patch moves fabrics_ops initialization out of the hot
> > request processing path for rdma and fc.
> > It continues to work in the same way for the loop target through an extra API.
> 
> Can't you add it to nvme_loop_init_iod()?
I didn't review that option when I first did it. Looking at it now, I believe it can be moved there.
I was under the assumption that init_request() is done for every request being processed; if that's the case, we are not currently touching nvmet_loop_queue, so that minor overhead can be avoided by continuing to do this in the current location.
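
To make that concrete, below is an untested sketch of what moving it into nvme_loop_init_iod() could look like. The nvme_loop_ops, nvme_cq and nvme_sq names are how I read the existing loop.c, so please treat this as an illustration of the idea rather than the final patch:

static int nvme_loop_init_iod(struct nvme_loop_ctrl *ctrl,
		struct nvme_loop_iod *iod, unsigned int queue_idx)
{
	iod->req.cmd = &iod->cmd;
	iod->req.rsp = &iod->rsp;
	iod->queue = &ctrl->queues[queue_idx];

	/* fabric_ops is a constant of the registered transport */
	iod->req.ops = &nvme_loop_ops;

	/* the loop queue's nvme cq/sq do not change for the iod's lifetime */
	iod->req.cq = &iod->queue->nvme_cq;
	iod->req.sq = &iod->queue->nvme_sq;
	return 0;
}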

> 
> > Additionally this patch avoids nvme cq and sq pointer initialization
> > during every request processing for rdma, because nvme queue linking
> > occurs at queue allocation time for the AQ and IOQs.
> 
> This breaks SRQ mode, where every nvmet_rdma_cmd serves different
> queues in its lifetime.

I fail to understand that.
nvmet_rdma_create_queue_ib() is called for as many QPs as we create, not based on the number of SRQs we create.
nvmet_rdma_queue stores the cq and sq.
So there are as many cq and sq objects on the fabric side as there are QPs on which the fabrics connect command is issued.
The queue is pulled out of the cq context on which we received the command.
The SRQ is just a resource shared among these nvme queues so that the RQ buffers can be shared, right?
What did I miss?
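
For reference, below is an untested sketch of the per-queue linkage I have in mind on the rdma side. The helper name is made up for illustration; rsps, recv_queue_size, nvme_cq/nvme_sq and nvmet_rdma_ops are how I read the existing rdma.c. Since the rsp contexts that embed nvmet_req are allocated per queue, the constant pointers could be filled in once, even when an SRQ provides the shared receive buffers:

/* Hypothetical helper, called once per queue after its rsps are allocated */
static void nvmet_rdma_init_rsp_reqs(struct nvmet_rdma_queue *queue)
{
	int i;

	for (i = 0; i < queue->recv_queue_size * 2; i++) {
		struct nvmet_rdma_rsp *rsp = &queue->rsps[i];

		rsp->queue = queue;		/* rsps never migrate between queues */
		rsp->req.cq = &queue->nvme_cq;	/* one cq/sq pair per QP, not per SRQ */
		rsp->req.sq = &queue->nvme_sq;
		rsp->req.ops = &nvmet_rdma_ops;	/* constant for the transport */
	}
}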

Thread overview: 13+ messages
2017-02-07 22:37 [PATCH] nvmet: Avoid writing fabric_ops, queue pointers on every request Parav Pandit
2017-02-08 10:13 ` Sagi Grimberg
2017-02-08 18:06   ` Parav Pandit [this message]
2017-02-08 18:18     ` Sagi Grimberg
2017-02-08 20:02       ` Parav Pandit
2017-02-13 16:11       ` Parav Pandit
2017-02-15  8:58         ` Sagi Grimberg
2017-02-15 16:15           ` hch
2017-02-15 16:28             ` Sagi Grimberg
2017-02-15 16:31               ` Parav Pandit
2017-02-15 16:34                 ` hch
2017-02-15 16:54                   ` Parav Pandit
2017-02-13 16:12       ` Parav Pandit
