From: Max Gurtovoy <maxg@mellanox.com>
To: Sagi Grimberg <sagi@grimberg.me>,
<linux-nvme@lists.infradead.org>, <hch@lst.de>,
<loberman@redhat.com>, <bvanassche@acm.org>,
<linux-rdma@vger.kernel.org>
Cc: rgirase@redhat.com, vladimirk@mellanox.com, shlomin@mellanox.com,
leonro@mellanox.com, dledford@redhat.com, jgg@mellanox.com,
oren@mellanox.com, kbusch@kernel.org, idanb@mellanox.com
Subject: Re: [PATCH v2 1/5] IB/core: add a simple SRQ pool per PD
Date: Fri, 20 Mar 2020 15:21:42 +0200 [thread overview]
Message-ID: <bfdb2827-84c6-3053-6191-76e1fff84445@mellanox.com> (raw)
In-Reply-To: <b37caf65-a084-6ed2-2ee9-8a51a6e9b79d@grimberg.me>
On 3/20/2020 7:59 AM, Sagi Grimberg wrote:
>
>> ULPs can use this API to create/destroy SRQs with the same
>> characteristics, implementing logic aimed at saving resources
>> without a significant performance penalty (e.g. create an SRQ per
>> completion vector and use shared receive buffers for multiple
>> controllers of the ULP).
>
> There is almost no logic in here. Is there a real point in having
> it the way it is?
>
> What is the point of creating a pool, getting all the srqs, managing
> them in the ULP (in an array), putting them back, and destroying them
> as a pool?
>
> I'd expect to have a refcount for each qp referencing a srq from the
> pool, and also that the pool would manage the srqs themselves.
>
> srqs are long lived resources, unlike mrs which are taken and restored
> to the pool on a per I/O basis...
>
> It's not that I hate it or something, it's just not clear to me how
> useful it is to have in this form...
Sagi,
This is surprising to me, since my v1 two years ago was a pure
nvmet/RDMA implementation with no srq_pool in it, and it was you who
asked me to add an srq_pool during that review.
I was also asked back then to add another implementation of this API
for another ULP, but I didn't have the capacity for it.
Now I've done both the NVMf and SRP target implementations on top of
the SRQ pool.
I'm fine with removing or reworking the pool in whatever way makes
everyone happy, and with dropping the SRP implementation if it isn't
needed.
I just want to get this feature into the NVMf target for the 5.7
release, so please decide on the implementation and I'll send the
patches.
-Max.
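For readers following the design question in the quoted review: below is a minimal user-space C sketch of the per-QP refcounting scheme Sagi describes, where the pool itself owns and manages the SRQs and each QP takes a reference on the SRQ tied to its completion vector. All names here (`srq_pool`, `mock_srq`, `srq_pool_get`, etc.) are hypothetical; this only models the proposed ownership rules, not the actual verbs API or either posted patch.

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for an ib_srq: just the bookkeeping the pool would need. */
struct mock_srq {
    int refcount;   /* number of QPs currently sharing this SRQ */
    int vector;     /* completion vector this SRQ is bound to */
};

struct srq_pool {
    struct mock_srq *srqs;
    int nr_srqs;    /* typically one SRQ per completion vector */
};

static int srq_pool_init(struct srq_pool *pool, int nr)
{
    pool->srqs = calloc(nr, sizeof(*pool->srqs));
    if (!pool->srqs)
        return -1;
    pool->nr_srqs = nr;
    for (int i = 0; i < nr; i++)
        pool->srqs[i].vector = i;
    return 0;
}

/* A QP takes a reference on the SRQ matching its completion vector;
 * the pool, not the ULP, tracks how many QPs share each SRQ. */
static struct mock_srq *srq_pool_get(struct srq_pool *pool, int vector)
{
    struct mock_srq *srq = &pool->srqs[vector % pool->nr_srqs];

    srq->refcount++;
    return srq;
}

/* Dropping a reference does NOT destroy the SRQ: SRQs are long-lived
 * resources, unlike MRs that cycle through a pool per I/O. */
static void srq_pool_put(struct srq_pool *pool, struct mock_srq *srq)
{
    (void)pool;
    assert(srq->refcount > 0);
    srq->refcount--;
}

/* SRQs are torn down only when the whole pool is destroyed, after
 * every QP has dropped its reference. */
static void srq_pool_destroy(struct srq_pool *pool)
{
    for (int i = 0; i < pool->nr_srqs; i++)
        assert(pool->srqs[i].refcount == 0);
    free(pool->srqs);
    pool->srqs = NULL;
    pool->nr_srqs = 0;
}
```

Under this scheme the ULP never holds an array of SRQs itself; it only asks the pool for "the SRQ for vector N" at QP creation time and puts its reference back at QP teardown.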
Thread overview: 28+ messages
2020-03-18 15:02 [PATCH v2 0/5] nvmet-rdma/srpt: SRQ per completion vector Max Gurtovoy
2020-03-18 15:02 ` [PATCH v2 1/5] IB/core: add a simple SRQ pool per PD Max Gurtovoy
2020-03-20 5:59 ` Sagi Grimberg
2020-03-20 13:21 ` Max Gurtovoy [this message]
2020-03-20 14:27 ` Leon Romanovsky
2020-03-18 15:02 ` [PATCH v2 2/5] nvmet-rdma: add srq pointer to rdma_cmd Max Gurtovoy
2020-03-18 23:32 ` Jason Gunthorpe
2020-03-19 8:48 ` Max Gurtovoy
2020-03-19 9:14 ` Leon Romanovsky
2020-03-19 10:55 ` Max Gurtovoy
2020-03-19 11:54 ` Jason Gunthorpe
2020-03-19 14:08 ` Konstantin Ryabitsev
2020-03-19 21:58 ` Konstantin Ryabitsev
2020-03-19 4:05 ` Bart Van Assche
2020-03-18 15:02 ` [PATCH v2 3/5] nvmet-rdma: use SRQ per completion vector Max Gurtovoy
2020-03-19 4:09 ` Bart Van Assche
2020-03-19 9:15 ` Max Gurtovoy
2020-03-19 11:56 ` Jason Gunthorpe
2020-03-19 12:48 ` Max Gurtovoy
2020-03-19 13:53 ` Jason Gunthorpe
2020-03-19 14:49 ` Bart Van Assche
[not found] ` <50dd8f5d-d092-54bc-236d-1e702fb95240@mellanox.com>
[not found] ` <6e3cc1c4-b24e-f607-42b3-5b83dd8c312c@mellanox.com>
2020-03-19 16:27 ` Max Gurtovoy
2020-03-20 5:47 ` Sagi Grimberg
2020-03-18 15:02 ` [PATCH v2 4/5] RDMA/srpt: use ib_alloc_cq instead of ib_alloc_cq_any Max Gurtovoy
2020-03-19 4:15 ` Bart Van Assche
2020-03-18 15:02 ` [PATCH v2 5/5] RDMA/srpt: use SRQ per completion vector Max Gurtovoy
2020-03-19 4:20 ` Bart Van Assche
2020-03-19 4:02 ` [PATCH v2 0/5] nvmet-rdma/srpt: " Bart Van Assche