* [PATCH 0/3] NVMe/RDMA patches for 5.8
@ 2020-06-23 14:55 Max Gurtovoy
2020-06-23 14:55 ` [PATCH 1/3] nvme-rdma: use new shared CQ mechanism Max Gurtovoy
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: Max Gurtovoy @ 2020-06-23 14:55 UTC (permalink / raw)
To: sagi, linux-nvme, kbusch, hch
Cc: yaminf, idanb, israelr, shlomin, jgg, Max Gurtovoy, ogerlitz
This series includes two patches from Yamin that were dropped from the
merge window since they caused conflicts between the RDMA and block trees.
They use a shared CQ API that was merged into the RDMA core layer to
improve performance and reduce resource allocation.
The last patch is a fix for the RDMA host.
The series applies cleanly on top of Linus' master, since I couldn't
fetch the nvme-5.8 branch.
Tests were run on top of Linus master + my fix to mlx5 driver:
"RDMA/mlx5: Fix integrity enabled QP creation".
Max Gurtovoy (1):
nvme-rdma: assign completion vector correctly
Yamin Friedman (2):
nvme-rdma: use new shared CQ mechanism
nvmet-rdma: use new shared CQ mechanism
drivers/nvme/host/rdma.c | 77 ++++++++++++++++++++++++++++++----------------
drivers/nvme/target/rdma.c | 14 ++++-----
2 files changed, 58 insertions(+), 33 deletions(-)
--
1.8.3.1
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
* [PATCH 1/3] nvme-rdma: use new shared CQ mechanism
2020-06-23 14:55 [PATCH 0/3] NVMe/RDMA patches for 5.8 Max Gurtovoy
@ 2020-06-23 14:55 ` Max Gurtovoy
2020-06-23 14:55 ` [PATCH 2/3] nvmet-rdma: " Max Gurtovoy
2020-06-23 14:55 ` [PATCH 3/3] nvme-rdma: assign completion vector correctly Max Gurtovoy
2 siblings, 0 replies; 14+ messages in thread
From: Max Gurtovoy @ 2020-06-23 14:55 UTC (permalink / raw)
To: sagi, linux-nvme, kbusch, hch
Cc: yaminf, idanb, israelr, shlomin, jgg, Max Gurtovoy, ogerlitz
From: Yamin Friedman <yaminf@mellanox.com>
Have the driver use shared CQs, providing a ~10%-20% improvement as seen
in the patch introducing shared CQs. Instead of opening a CQ for each QP
per connected controller, a CQ for each core will be provided by the RDMA
core driver and shared between the QPs on that core, reducing interrupt
overhead.
Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/host/rdma.c | 77 ++++++++++++++++++++++++++++++++----------------
1 file changed, 51 insertions(+), 26 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index f8f856d..f5d6a57 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -96,6 +96,7 @@ struct nvme_rdma_queue {
int cm_error;
struct completion cm_done;
bool pi_support;
+ int cq_size;
};
struct nvme_rdma_ctrl {
@@ -274,6 +275,7 @@ static int nvme_rdma_create_qp(struct nvme_rdma_queue *queue, const int factor)
init_attr.recv_cq = queue->ib_cq;
if (queue->pi_support)
init_attr.create_flags |= IB_QP_CREATE_INTEGRITY_EN;
+ init_attr.qp_context = queue;
ret = rdma_create_qp(queue->cm_id, dev->pd, &init_attr);
@@ -408,6 +410,14 @@ static int nvme_rdma_dev_get(struct nvme_rdma_device *dev)
return NULL;
}
+static void nvme_rdma_free_cq(struct nvme_rdma_queue *queue)
+{
+ if (nvme_rdma_poll_queue(queue))
+ ib_free_cq(queue->ib_cq);
+ else
+ ib_cq_pool_put(queue->ib_cq, queue->cq_size);
+}
+
static void nvme_rdma_destroy_queue_ib(struct nvme_rdma_queue *queue)
{
struct nvme_rdma_device *dev;
@@ -429,7 +439,7 @@ static void nvme_rdma_destroy_queue_ib(struct nvme_rdma_queue *queue)
* the destruction of the QP shouldn't use rdma_cm API.
*/
ib_destroy_qp(queue->qp);
- ib_free_cq(queue->ib_cq);
+ nvme_rdma_free_cq(queue);
nvme_rdma_free_ring(ibdev, queue->rsp_ring, queue->queue_size,
sizeof(struct nvme_completion), DMA_FROM_DEVICE);
@@ -449,13 +459,42 @@ static int nvme_rdma_get_max_fr_pages(struct ib_device *ibdev, bool pi_support)
return min_t(u32, NVME_RDMA_MAX_SEGMENTS, max_page_list_len - 1);
}
+static int nvme_rdma_create_cq(struct ib_device *ibdev,
+ struct nvme_rdma_queue *queue)
+{
+ int ret, comp_vector, idx = nvme_rdma_queue_idx(queue);
+ enum ib_poll_context poll_ctx;
+
+ /*
+ * Spread I/O queues completion vectors according their queue index.
+ * Admin queues can always go on completion vector 0.
+ */
+ comp_vector = idx == 0 ? idx : idx - 1;
+
+ /* Polling queues need direct cq polling context */
+ if (nvme_rdma_poll_queue(queue)) {
+ poll_ctx = IB_POLL_DIRECT;
+ queue->ib_cq = ib_alloc_cq(ibdev, queue, queue->cq_size,
+ comp_vector, poll_ctx);
+ } else {
+ poll_ctx = IB_POLL_SOFTIRQ;
+ queue->ib_cq = ib_cq_pool_get(ibdev, queue->cq_size,
+ comp_vector, poll_ctx);
+ }
+
+ if (IS_ERR(queue->ib_cq)) {
+ ret = PTR_ERR(queue->ib_cq);
+ return ret;
+ }
+
+ return 0;
+}
+
static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
{
struct ib_device *ibdev;
const int send_wr_factor = 3; /* MR, SEND, INV */
const int cq_factor = send_wr_factor + 1; /* + RECV */
- int comp_vector, idx = nvme_rdma_queue_idx(queue);
- enum ib_poll_context poll_ctx;
int ret, pages_per_mr;
queue->device = nvme_rdma_find_get_device(queue->cm_id);
@@ -466,26 +505,12 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
}
ibdev = queue->device->dev;
- /*
- * Spread I/O queues completion vectors according their queue index.
- * Admin queues can always go on completion vector 0.
- */
- comp_vector = idx == 0 ? idx : idx - 1;
-
- /* Polling queues need direct cq polling context */
- if (nvme_rdma_poll_queue(queue))
- poll_ctx = IB_POLL_DIRECT;
- else
- poll_ctx = IB_POLL_SOFTIRQ;
-
/* +1 for ib_stop_cq */
- queue->ib_cq = ib_alloc_cq(ibdev, queue,
- cq_factor * queue->queue_size + 1,
- comp_vector, poll_ctx);
- if (IS_ERR(queue->ib_cq)) {
- ret = PTR_ERR(queue->ib_cq);
+ queue->cq_size = cq_factor * queue->queue_size + 1;
+
+ ret = nvme_rdma_create_cq(ibdev, queue);
+ if (ret)
goto out_put_dev;
- }
ret = nvme_rdma_create_qp(queue, send_wr_factor);
if (ret)
@@ -511,7 +536,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
if (ret) {
dev_err(queue->ctrl->ctrl.device,
"failed to initialize MR pool sized %d for QID %d\n",
- queue->queue_size, idx);
+ queue->queue_size, nvme_rdma_queue_idx(queue));
goto out_destroy_ring;
}
@@ -522,7 +547,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
if (ret) {
dev_err(queue->ctrl->ctrl.device,
"failed to initialize PI MR pool sized %d for QID %d\n",
- queue->queue_size, idx);
+ queue->queue_size, nvme_rdma_queue_idx(queue));
goto out_destroy_mr_pool;
}
}
@@ -539,7 +564,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
out_destroy_qp:
rdma_destroy_qp(queue->cm_id);
out_destroy_ib_cq:
- ib_free_cq(queue->ib_cq);
+ nvme_rdma_free_cq(queue);
out_put_dev:
nvme_rdma_dev_put(queue->device);
return ret;
@@ -1152,7 +1177,7 @@ static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
static void nvme_rdma_wr_error(struct ib_cq *cq, struct ib_wc *wc,
const char *op)
{
- struct nvme_rdma_queue *queue = cq->cq_context;
+ struct nvme_rdma_queue *queue = wc->qp->qp_context;
struct nvme_rdma_ctrl *ctrl = queue->ctrl;
if (ctrl->ctrl.state == NVME_CTRL_LIVE)
@@ -1705,7 +1730,7 @@ static void nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
{
struct nvme_rdma_qe *qe =
container_of(wc->wr_cqe, struct nvme_rdma_qe, cqe);
- struct nvme_rdma_queue *queue = cq->cq_context;
+ struct nvme_rdma_queue *queue = wc->qp->qp_context;
struct ib_device *ibdev = queue->device->dev;
struct nvme_completion *cqe = qe->data;
const size_t len = sizeof(struct nvme_completion);
--
1.8.3.1
* [PATCH 2/3] nvmet-rdma: use new shared CQ mechanism
2020-06-23 14:55 [PATCH 0/3] NVMe/RDMA patches for 5.8 Max Gurtovoy
2020-06-23 14:55 ` [PATCH 1/3] nvme-rdma: use new shared CQ mechanism Max Gurtovoy
@ 2020-06-23 14:55 ` Max Gurtovoy
2020-06-23 14:55 ` [PATCH 3/3] nvme-rdma: assign completion vector correctly Max Gurtovoy
2 siblings, 0 replies; 14+ messages in thread
From: Max Gurtovoy @ 2020-06-23 14:55 UTC (permalink / raw)
To: sagi, linux-nvme, kbusch, hch
Cc: yaminf, idanb, israelr, shlomin, jgg, Max Gurtovoy, ogerlitz
From: Yamin Friedman <yaminf@mellanox.com>
Have the driver use shared CQs, providing a ~10%-20% improvement when
multiple disks are used. Instead of opening a CQ for each QP per
controller, a CQ for each core will be provided by the RDMA core driver
and shared between the QPs on that core, reducing interrupt overhead.
Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/target/rdma.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 76ea23a..898f6fd 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -752,7 +752,7 @@ static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc)
{
struct nvmet_rdma_rsp *rsp =
container_of(wc->wr_cqe, struct nvmet_rdma_rsp, read_cqe);
- struct nvmet_rdma_queue *queue = cq->cq_context;
+ struct nvmet_rdma_queue *queue = wc->qp->qp_context;
u16 status = 0;
WARN_ON(rsp->n_rdma <= 0);
@@ -1008,7 +1008,7 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
{
struct nvmet_rdma_cmd *cmd =
container_of(wc->wr_cqe, struct nvmet_rdma_cmd, cqe);
- struct nvmet_rdma_queue *queue = cq->cq_context;
+ struct nvmet_rdma_queue *queue = wc->qp->qp_context;
struct nvmet_rdma_rsp *rsp;
if (unlikely(wc->status != IB_WC_SUCCESS)) {
@@ -1258,9 +1258,8 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
*/
nr_cqe = queue->recv_queue_size + 2 * queue->send_queue_size;
- queue->cq = ib_alloc_cq(ndev->device, queue,
- nr_cqe + 1, queue->comp_vector,
- IB_POLL_WORKQUEUE);
+ queue->cq = ib_cq_pool_get(ndev->device, nr_cqe + 1,
+ queue->comp_vector, IB_POLL_WORKQUEUE);
if (IS_ERR(queue->cq)) {
ret = PTR_ERR(queue->cq);
pr_err("failed to create CQ cqe= %d ret= %d\n",
@@ -1322,7 +1321,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
err_destroy_qp:
rdma_destroy_qp(queue->cm_id);
err_destroy_cq:
- ib_free_cq(queue->cq);
+ ib_cq_pool_put(queue->cq, nr_cqe + 1);
goto out;
}
@@ -1332,7 +1331,8 @@ static void nvmet_rdma_destroy_queue_ib(struct nvmet_rdma_queue *queue)
if (queue->cm_id)
rdma_destroy_id(queue->cm_id);
ib_destroy_qp(queue->qp);
- ib_free_cq(queue->cq);
+ ib_cq_pool_put(queue->cq, queue->recv_queue_size + 2 *
+ queue->send_queue_size + 1);
}
static void nvmet_rdma_free_queue(struct nvmet_rdma_queue *queue)
--
1.8.3.1
* [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-23 14:55 [PATCH 0/3] NVMe/RDMA patches for 5.8 Max Gurtovoy
2020-06-23 14:55 ` [PATCH 1/3] nvme-rdma: use new shared CQ mechanism Max Gurtovoy
2020-06-23 14:55 ` [PATCH 2/3] nvmet-rdma: " Max Gurtovoy
@ 2020-06-23 14:55 ` Max Gurtovoy
2020-06-23 15:22 ` Jason Gunthorpe
2020-06-24 16:41 ` Christoph Hellwig
2 siblings, 2 replies; 14+ messages in thread
From: Max Gurtovoy @ 2020-06-23 14:55 UTC (permalink / raw)
To: sagi, linux-nvme, kbusch, hch
Cc: yaminf, idanb, israelr, shlomin, jgg, Max Gurtovoy, ogerlitz
The completion vector index that is given during CQ creation can't
exceed the number of vectors supported by the underlying RDMA device.
This violation can currently occur, for example, when one tries to
connect with N regular read/write queues and M poll queues, where
N + M > num_supported_vectors. This leads to a failure to establish a
connection to the remote target. Instead, in that case, share a
completion vector between queues.
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
drivers/nvme/host/rdma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index f5d6a57..981adbd 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -469,7 +469,7 @@ static int nvme_rdma_create_cq(struct ib_device *ibdev,
* Spread I/O queues completion vectors according their queue index.
* Admin queues can always go on completion vector 0.
*/
- comp_vector = idx == 0 ? idx : idx - 1;
+ comp_vector = (idx == 0 ? idx : idx - 1) % ibdev->num_comp_vectors;
/* Polling queues need direct cq polling context */
if (nvme_rdma_poll_queue(queue)) {
--
1.8.3.1
* Re: [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-23 14:55 ` [PATCH 3/3] nvme-rdma: assign completion vector correctly Max Gurtovoy
@ 2020-06-23 15:22 ` Jason Gunthorpe
2020-06-23 17:34 ` Sagi Grimberg
2020-06-24 16:41 ` Christoph Hellwig
1 sibling, 1 reply; 14+ messages in thread
From: Jason Gunthorpe @ 2020-06-23 15:22 UTC (permalink / raw)
To: Max Gurtovoy
Cc: yaminf, sagi, shlomin, israelr, linux-nvme, idanb, kbusch, ogerlitz, hch
On Tue, Jun 23, 2020 at 05:55:25PM +0300, Max Gurtovoy wrote:
> The completion vector index that is given during CQ creation can't
> exceed the number of vectors supported by the underlying RDMA device.
> This violation can currently occur, for example, when one tries to
> connect with N regular read/write queues and M poll queues, where
> N + M > num_supported_vectors. This leads to a failure to establish a
> connection to the remote target. Instead, in that case, share a
> completion vector between queues.
That sounds like an RC patch? Where is the Fixes line? Why is it in
this series?
Jason
* Re: [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-23 15:22 ` Jason Gunthorpe
@ 2020-06-23 17:34 ` Sagi Grimberg
2020-06-24 8:34 ` Max Gurtovoy
0 siblings, 1 reply; 14+ messages in thread
From: Sagi Grimberg @ 2020-06-23 17:34 UTC (permalink / raw)
To: Jason Gunthorpe, Max Gurtovoy
Cc: yaminf, idanb, israelr, linux-nvme, shlomin, kbusch, ogerlitz, hch
>> The completion vector index that is given during CQ creation can't
>> exceed the number of vectors supported by the underlying RDMA device.
>> This violation can currently occur, for example, when one tries to
>> connect with N regular read/write queues and M poll queues, where
>> N + M > num_supported_vectors. This leads to a failure to establish a
>> connection to the remote target. Instead, in that case, share a
>> completion vector between queues.
>
> That sounds like an RC patch? Where is the Fixes line? Why is it in
> this series?
Agree, this should be sent as a separate patch.
* Re: [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-23 17:34 ` Sagi Grimberg
@ 2020-06-24 8:34 ` Max Gurtovoy
2020-06-24 8:37 ` Christoph Hellwig
0 siblings, 1 reply; 14+ messages in thread
From: Max Gurtovoy @ 2020-06-24 8:34 UTC (permalink / raw)
To: Sagi Grimberg, Jason Gunthorpe
Cc: yaminf, idanb, israelr, linux-nvme, shlomin, kbusch, ogerlitz, hch
On 6/23/2020 8:34 PM, Sagi Grimberg wrote:
>
>>> The completion vector index that is given during CQ creation can't
>>> exceed the number of vectors supported by the underlying RDMA device.
>>> This violation can currently occur, for example, when one tries to
>>> connect with N regular read/write queues and M poll queues, where
>>> N + M > num_supported_vectors. This leads to a failure to establish a
>>> connection to the remote target. Instead, in that case, share a
>>> completion vector between queues.
>>
>> That sounds like an RC patch? Where is the Fixes line? Why is it in
>> this series?
>
> Agree, this should be sent as a separate patch.
The reason I sent it in one series is to avoid conflicts. Since all the
patches can go to nvme-5.8, I tried to make life easier.
We can do it separately, of course.
Christoph,
would you like to merge patches 1+2 from this series, or should I send
them again as well?
* Re: [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-24 8:34 ` Max Gurtovoy
@ 2020-06-24 8:37 ` Christoph Hellwig
2020-06-24 8:44 ` Max Gurtovoy
0 siblings, 1 reply; 14+ messages in thread
From: Christoph Hellwig @ 2020-06-24 8:37 UTC (permalink / raw)
To: Max Gurtovoy
Cc: yaminf, Sagi Grimberg, idanb, israelr, linux-nvme, shlomin,
Jason Gunthorpe, kbusch, ogerlitz, hch
On Wed, Jun 24, 2020 at 11:34:22AM +0300, Max Gurtovoy wrote:
>
> On 6/23/2020 8:34 PM, Sagi Grimberg wrote:
>>
>>>> The completion vector index that is given during CQ creation can't
>>>> exceed the number of vectors supported by the underlying RDMA device.
>>>> This violation can currently occur, for example, when one tries to
>>>> connect with N regular read/write queues and M poll queues, where
>>>> N + M > num_supported_vectors. This leads to a failure to establish a
>>>> connection to the remote target. Instead, in that case, share a
>>>> completion vector between queues.
>>>
>>> That sounds like an RC patch? Where is the Fixes line? Why is it in
>>> this series?
>>
>> Agree, this should be sent as a separate patch.
>
> The reason I sent it in 1 series is to avoid conflicts. Since all the
> patches can go to nvme-5.8, I tried to make life easier.
>
> We can do it separately of course.
>
> Christoph,
>
> would you like to merge patches 1+2 from this series or should I send them
> again as well ?
I don't think 1+2 are 5.8 material, so please just resend 3 standalone
for now, and then resend 1+2 once I've merged it and rebased nvme-5.9
on top of nvme-5.8.
* Re: [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-24 8:37 ` Christoph Hellwig
@ 2020-06-24 8:44 ` Max Gurtovoy
2020-06-24 8:46 ` Christoph Hellwig
2020-06-24 14:22 ` Jason Gunthorpe
0 siblings, 2 replies; 14+ messages in thread
From: Max Gurtovoy @ 2020-06-24 8:44 UTC (permalink / raw)
To: Christoph Hellwig
Cc: yaminf, Sagi Grimberg, idanb, israelr, linux-nvme, shlomin,
Jason Gunthorpe, kbusch, ogerlitz
On 6/24/2020 11:37 AM, Christoph Hellwig wrote:
> On Wed, Jun 24, 2020 at 11:34:22AM +0300, Max Gurtovoy wrote:
>> On 6/23/2020 8:34 PM, Sagi Grimberg wrote:
>>>>> The completion vector index that is given during CQ creation can't
>>>>> exceed the number of vectors supported by the underlying RDMA device.
>>>>> This violation can currently occur, for example, when one tries to
>>>>> connect with N regular read/write queues and M poll queues, where
>>>>> N + M > num_supported_vectors. This leads to a failure to establish a
>>>>> connection to the remote target. Instead, in that case, share a
>>>>> completion vector between queues.
>>>> That sounds like an RC patch? Where is the Fixes line? Why is it in
>>>> this series?
>>> Agree, this should be sent as a separate patch.
>> The reason I sent it in 1 series is to avoid conflicts. Since all the
>> patches can go to nvme-5.8, I tried to make life easier.
>>
>> We can do it separately of course.
>>
>> Christoph,
>>
>> would you like to merge patches 1+2 from this series or should I send them
>> again as well ?
> I don't think 1+2 are 5.8 material, so please just resend 3 standalone
> for now, and then resend 1+2 once I've merged it and rebased nvme-5.9
> on top of nvme-5.8.
OK. Actually, 1+2 were aimed to be merged into 5.8 but created a conflict
between Jason's and Jens's trees.
If we go this way, it means we can't push new features to the RDMA tree
and use them in NVMf in the same cycle.
Jason,
can we push iSER CQ sharing to kernel 5.8?
* Re: [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-24 8:44 ` Max Gurtovoy
@ 2020-06-24 8:46 ` Christoph Hellwig
2020-06-24 14:22 ` Jason Gunthorpe
1 sibling, 0 replies; 14+ messages in thread
From: Christoph Hellwig @ 2020-06-24 8:46 UTC (permalink / raw)
To: Max Gurtovoy
Cc: yaminf, Sagi Grimberg, idanb, israelr, linux-nvme, shlomin,
Jason Gunthorpe, kbusch, ogerlitz, Christoph Hellwig
On Wed, Jun 24, 2020 at 11:44:06AM +0300, Max Gurtovoy wrote:
>> I don't think 1+2 are 5.8 material, so please just resend 3 standalone
>> for now, and then resend 1+2 once I've merged it and rebased nvme-5.9
>> on top of nvme-5.8.
>
> Ok. Actually 1+2 were aimed to be merged to 5.8 but created a conflict
> between Jason's and Jens's trees.
>
> If we go this way it means we can't push new features to RDMA and use it in
> NVMf in the same cycle.
>
> Jason,
>
> can we push iSER CQ sharing to kernel-5.8 ?
We're not going to merge new features in nvme after -rc2. For
cross-subsystem coordination, either everything needs to go in through
one tree if possible, or we'll need shared branches.
* Re: [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-24 8:44 ` Max Gurtovoy
2020-06-24 8:46 ` Christoph Hellwig
@ 2020-06-24 14:22 ` Jason Gunthorpe
2020-06-24 15:14 ` Max Gurtovoy
1 sibling, 1 reply; 14+ messages in thread
From: Jason Gunthorpe @ 2020-06-24 14:22 UTC (permalink / raw)
To: Max Gurtovoy
Cc: yaminf, Sagi Grimberg, idanb, israelr, linux-nvme, shlomin,
kbusch, ogerlitz, Christoph Hellwig
On Wed, Jun 24, 2020 at 11:44:06AM +0300, Max Gurtovoy wrote:
>
> On 6/24/2020 11:37 AM, Christoph Hellwig wrote:
> > On Wed, Jun 24, 2020 at 11:34:22AM +0300, Max Gurtovoy wrote:
> > > On 6/23/2020 8:34 PM, Sagi Grimberg wrote:
> > > > > > The completion vector index that is given during CQ creation can't
> > > > > > exceed the number of vectors supported by the underlying RDMA device.
> > > > > > This violation can currently occur, for example, when one tries to
> > > > > > connect with N regular read/write queues and M poll queues, where
> > > > > > N + M > num_supported_vectors. This leads to a failure to establish a
> > > > > > connection to the remote target. Instead, in that case, share a
> > > > > > completion vector between queues.
> > > > > That sounds like an RC patch? Where is the Fixes line? Why is it in
> > > > > this series?
> > > > Agree, this should be sent as a separate patch.
> > > The reason I sent it in 1 series is to avoid conflicts. Since all the
> > > patches can go to nvme-5.8, I tried to make life easier.
> > >
> > > We can do it separately of course.
> > >
> > > Christoph,
> > >
> > > would you like to merge patches 1+2 from this series or should I send them
> > > again as well ?
> > I don't think 1+2 are 5.8 material, so please just resend 3 standalone
> > for now, and then resend 1+2 once I've merged it and rebased nvme-5.9
> > on top of nvme-5.8.
>
> Ok. Actually 1+2 were aimed to be merged to 5.8 but created a conflict
> between Jason's and Jens's trees.
>
> If we go this way it means we can't push new features to RDMA and use it in
> NVMf in the same cycle.
>
> Jason,
>
> can we push iSER CQ sharing to kernel-5.8 ?
I don't think so..
Where are these patches anyhow? I don't see any iser stuff in rdma
patchworks?
If you need a branch for something you should plan it out now.. I can
help organize the branch process for you, but you have to plan it
out :)
Jason
* Re: [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-24 14:22 ` Jason Gunthorpe
@ 2020-06-24 15:14 ` Max Gurtovoy
0 siblings, 0 replies; 14+ messages in thread
From: Max Gurtovoy @ 2020-06-24 15:14 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: yaminf, Sagi Grimberg, idanb, israelr, linux-nvme, shlomin,
kbusch, ogerlitz, Christoph Hellwig
On 6/24/2020 5:22 PM, Jason Gunthorpe wrote:
> On Wed, Jun 24, 2020 at 11:44:06AM +0300, Max Gurtovoy wrote:
>> On 6/24/2020 11:37 AM, Christoph Hellwig wrote:
>>> On Wed, Jun 24, 2020 at 11:34:22AM +0300, Max Gurtovoy wrote:
>>>> On 6/23/2020 8:34 PM, Sagi Grimberg wrote:
>>>>>>> The completion vector index that is given during CQ creation can't
>>>>>>> exceed the number of vectors supported by the underlying RDMA device.
>>>>>>> This violation can currently occur, for example, when one tries to
>>>>>>> connect with N regular read/write queues and M poll queues, where
>>>>>>> N + M > num_supported_vectors. This leads to a failure to establish a
>>>>>>> connection to the remote target. Instead, in that case, share a
>>>>>>> completion vector between queues.
>>>>>> That sounds like an RC patch? Where is the Fixes line? Why is it in
>>>>>> this series?
>>>>> Agree, this should be sent as a separate patch.
>>>> The reason I sent it in 1 series is to avoid conflicts. Since all the
>>>> patches can go to nvme-5.8, I tried to make life easier.
>>>>
>>>> We can do it separately of course.
>>>>
>>>> Christoph,
>>>>
>>>> would you like to merge patches 1+2 from this series or should I send them
>>>> again as well ?
>>> I don't think 1+2 are 5.8 material, so please just resend 3 standalone
>>> for now, and then resend 1+2 once I've merged it and rebased nvme-5.9
>>> on top of nvme-5.8.
>> Ok. Actually 1+2 were aimed to be merged to 5.8 but created a conflict
>> between Jason's and Jens's trees.
>>
>> If we go this way it means we can't push new features to RDMA and use it in
>> NVMf in the same cycle.
>>
>> Jason,
>>
>> can we push iSER CQ sharing to kernel-5.8 ?
> I don't think so..
>
> Where are these patches anyhow? I don't see any iser stuff in rdma
> patchworks?
These patches were developed by Yamin and reviewed by me internally
after the merge window of 5.8.
>
> If you need a branch for something you should plan it out now.. I can
> help organize the branch process for you, but you have to plan it
> out :)
I don't think it's necessary for now.
I'll just send iSER patches for review and you can fetch them to
for-next (5.9 merge window).
>
> Jason
* Re: [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-23 14:55 ` [PATCH 3/3] nvme-rdma: assign completion vector correctly Max Gurtovoy
2020-06-23 15:22 ` Jason Gunthorpe
@ 2020-06-24 16:41 ` Christoph Hellwig
2020-06-25 8:11 ` Max Gurtovoy
1 sibling, 1 reply; 14+ messages in thread
From: Christoph Hellwig @ 2020-06-24 16:41 UTC (permalink / raw)
To: Max Gurtovoy
Cc: yaminf, sagi, shlomin, israelr, linux-nvme, idanb, jgg, kbusch,
ogerlitz, hch
Applied patch 3 to nvme-5.8.
* Re: [PATCH 3/3] nvme-rdma: assign completion vector correctly
2020-06-24 16:41 ` Christoph Hellwig
@ 2020-06-25 8:11 ` Max Gurtovoy
0 siblings, 0 replies; 14+ messages in thread
From: Max Gurtovoy @ 2020-06-25 8:11 UTC (permalink / raw)
To: Christoph Hellwig
Cc: yaminf, sagi, shlomin, israelr, linux-nvme, idanb, jgg, kbusch, ogerlitz
Can you add the Fixes line manually, please?
Fixes: b65bb777ef223 ("nvme-rdma: support separate queue maps for read and write")
On 6/24/2020 7:41 PM, Christoph Hellwig wrote:
> Applied patch 3 to nvme-5.8.