* [PATCH] nvme-fc: clear q_live at beginning of association teardown
From: James Smart @ 2021-05-11 4:56 UTC
To: linux-nvme; +Cc: James Smart, stable
The __nvmf_check_ready() routine used to bounce all filesystem io if
the controller state isn't LIVE. However, a later patch changed the
logic so that the rejection ends up being based on the Q live check.
The fc transport has a slightly different sequence from rdma and tcp
for shutting down queues/marking them non-live. FC marks its queue
non-live after aborting all ios and waiting for their termination,
leaving a rather large window for filesystem io to continue to hit the
transport. Unfortunately this resulted in filesystem io or applications
seeing I/O errors.
Change the fc transport to mark the queues non-live at the first
sign of teardown for the association (when i/o is initially terminated).
Fixes: 73a5379937ec ("nvme-fabrics: allow to queue requests for live queues")
Cc: <stable@vger.kernel.org> # v5.8+
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
stable trees for 5.8 and 5.9 will require a slightly modified patch
---
drivers/nvme/host/fc.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index d9ab9e7871d0..256e87721a01 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2461,6 +2461,18 @@ nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved)
static void
__nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
{
+ int q;
+
+ /*
+ * if aborting io, the queues are no longer good, mark them
+ * all as not live.
+ */
+ if (ctrl->ctrl.queue_count > 1) {
+ for (q = 1; q < ctrl->ctrl.queue_count; q++)
+ clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[q].flags);
+ }
+ clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[0].flags);
+
/*
* If io queues are present, stop them and terminate all outstanding
* ios on them. As FC allocates FC exchange for each io, the
--
2.26.2
* Re: [PATCH] nvme-fc: clear q_live at beginning of association teardown
From: Sagi Grimberg @ 2021-05-12 23:03 UTC
To: James Smart, linux-nvme; +Cc: stable
> The __nvmf_check_ready() routine used to bounce all filesystem io if
> the controller state isn't LIVE. However, a later patch changed the
> logic so that the rejection ends up being based on the Q live check.
> The fc transport has a slightly different sequence from rdma and tcp
> for shutting down queues/marking them non-live. FC marks its queue
> non-live after aborting all ios and waiting for their termination,
> leaving a rather large window for filesystem io to continue to hit the
> transport. Unfortunately this resulted in filesystem io or applications
> seeing I/O errors.
>
> Change the fc transport to mark the queues non-live at the first
> sign of teardown for the association (when i/o is initially terminated).
Sounds like the correct behavior to me, what is the motivation for doing
that only after all I/O was aborted?
And,
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
* Re: [PATCH] nvme-fc: clear q_live at beginning of association teardown
From: Himanshu Madhani @ 2021-05-13 14:42 UTC
To: James Smart, linux-nvme; +Cc: stable
On 5/10/21 11:56 PM, James Smart wrote:
> The __nvmf_check_ready() routine used to bounce all filesystem io if
> the controller state isn't LIVE. However, a later patch changed the
> logic so that the rejection ends up being based on the Q live check.
> The fc transport has a slightly different sequence from rdma and tcp
> for shutting down queues/marking them non-live. FC marks its queue
> non-live after aborting all ios and waiting for their termination,
> leaving a rather large window for filesystem io to continue to hit the
> transport. Unfortunately this resulted in filesystem io or applications
> seeing I/O errors.
>
> Change the fc transport to mark the queues non-live at the first
> sign of teardown for the association (when i/o is initially terminated).
>
> Fixes: 73a5379937ec ("nvme-fabrics: allow to queue requests for live queues")
> Cc: <stable@vger.kernel.org> # v5.8+
> Signed-off-by: James Smart <jsmart2021@gmail.com>
>
> ---
> stable trees for 5.8 and 5.9 will require a slightly modified patch
> ---
> drivers/nvme/host/fc.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index d9ab9e7871d0..256e87721a01 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -2461,6 +2461,18 @@ nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved)
> static void
> __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
> {
> + int q;
> +
> + /*
> + * if aborting io, the queues are no longer good, mark them
> + * all as not live.
> + */
> + if (ctrl->ctrl.queue_count > 1) {
> + for (q = 1; q < ctrl->ctrl.queue_count; q++)
> + clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[q].flags);
> + }
> + clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[0].flags);
> +
> /*
> * If io queues are present, stop them and terminate all outstanding
> * ios on them. As FC allocates FC exchange for each io, the
>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
--
Himanshu Madhani Oracle Linux Engineering
* Re: [PATCH] nvme-fc: clear q_live at beginning of association teardown
From: James Smart @ 2021-05-13 18:47 UTC
To: Sagi Grimberg, linux-nvme; +Cc: stable
On 5/12/2021 4:03 PM, Sagi Grimberg wrote:
>
>> The __nvmf_check_ready() routine used to bounce all filesystem io if
>> the controller state isn't LIVE. However, a later patch changed the
>> logic so that the rejection ends up being based on the Q live check.
>> The fc transport has a slightly different sequence from rdma and tcp
>> for shutting down queues/marking them non-live. FC marks its queue
>> non-live after aborting all ios and waiting for their termination,
>> leaving a rather large window for filesystem io to continue to hit the
>> transport. Unfortunately this resulted in filesystem io or applications
>> seeing I/O errors.
>>
>> Change the fc transport to mark the queues non-live at the first
>> sign of teardown for the association (when i/o is initially terminated).
>
> Sounds like the correct behavior to me, what is the motivation for doing
> that only after all I/O was aborted?
>
> And,
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Source evolution over time (rdma/tcp changed how they worked), and the
need didn't show up earlier given the earlier checks.
-- james
* Re: [PATCH] nvme-fc: clear q_live at beginning of association teardown
From: Sagi Grimberg @ 2021-05-13 19:59 UTC
To: James Smart, linux-nvme; +Cc: stable
>>> The __nvmf_check_ready() routine used to bounce all filesystem io if
>>> the controller state isn't LIVE. However, a later patch changed the
>>> logic so that the rejection ends up being based on the Q live check.
>>> The fc transport has a slightly different sequence from rdma and tcp
>>> for shutting down queues/marking them non-live. FC marks its queue
>>> non-live after aborting all ios and waiting for their termination,
>>> leaving a rather large window for filesystem io to continue to hit the
>>> transport. Unfortunately this resulted in filesystem io or applications
>>> seeing I/O errors.
>>>
>>> Change the fc transport to mark the queues non-live at the first
>>> sign of teardown for the association (when i/o is initially terminated).
>>
>> Sounds like the correct behavior to me, what is the motivation for doing
>> that only after all I/O was aborted?
>>
>> And,
>> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
>
> source evolution over time (rdma/tcp changed how they worked) and the
> need didn't show up earlier based on the earlier checks.
Makes sense...
* Re: [PATCH] nvme-fc: clear q_live at beginning of association teardown
From: Hannes Reinecke @ 2021-05-14 9:20 UTC
To: James Smart, linux-nvme; +Cc: stable
On 5/11/21 6:56 AM, James Smart wrote:
> The __nvmf_check_ready() routine used to bounce all filesystem io if
> the controller state isn't LIVE. However, a later patch changed the
> logic so that the rejection ends up being based on the Q live check.
> The fc transport has a slightly different sequence from rdma and tcp
> for shutting down queues/marking them non-live. FC marks its queue
> non-live after aborting all ios and waiting for their termination,
> leaving a rather large window for filesystem io to continue to hit the
> transport. Unfortunately this resulted in filesystem io or applications
> seeing I/O errors.
>
> Change the fc transport to mark the queues non-live at the first
> sign of teardown for the association (when i/o is initially terminated).
>
> Fixes: 73a5379937ec ("nvme-fabrics: allow to queue requests for live queues")
> Cc: <stable@vger.kernel.org> # v5.8+
> Signed-off-by: James Smart <jsmart2021@gmail.com>
>
> ---
> stable trees for 5.8 and 5.9 will require a slightly modified patch
> ---
> drivers/nvme/host/fc.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index d9ab9e7871d0..256e87721a01 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -2461,6 +2461,18 @@ nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved)
> static void
> __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
> {
> + int q;
> +
> + /*
> + * if aborting io, the queues are no longer good, mark them
> + * all as not live.
> + */
> + if (ctrl->ctrl.queue_count > 1) {
> + for (q = 1; q < ctrl->ctrl.queue_count; q++)
> + clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[q].flags);
> + }
> + clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[0].flags);
> +
> /*
> * If io queues are present, stop them and terminate all outstanding
> * ios on them. As FC allocates FC exchange for each io, the
>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions Germany GmbH, 90409 Nürnberg
GF: F. Imendörffer, HRB 36809 (AG Nürnberg)
* Re: [PATCH] nvme-fc: clear q_live at beginning of association teardown
From: Christoph Hellwig @ 2021-05-19 6:42 UTC
To: James Smart; +Cc: linux-nvme, stable
Thanks,
applied to nvme-5.13.