linux-nvme.lists.infradead.org archive mirror
* [PATCH] nvmet: Always remove processed AER elements from list
@ 2019-11-03 20:13 Chaitanya Kulkarni
  2019-11-04  8:13 ` Daniel Wagner
  0 siblings, 1 reply; 11+ messages in thread
From: Chaitanya Kulkarni @ 2019-11-03 20:13 UTC (permalink / raw)
  To: linux-nvme; +Cc: Chaitanya Kulkarni, jthumshirn, sagi, dwagner, hch

From: Daniel Wagner <dwagner@suse.de>

All async events are enqueued via nvmet_add_async_event(), which
updates the ctrl->async_event_cmds[] array; additionally, a struct
nvmet_async_event is added to the ctrl->async_events list.

Under normal operation, nvmet_async_event_work() updates the
ctrl->async_event_cmds[] array again and removes the corresponding
struct nvmet_async_event from the list. However, nvmet_sq_destroy()
may be called instead, which calls nvmet_async_events_free(), and
that function only updates the ctrl->async_event_cmds[] array,
leaving the list entries behind.

Add a new function, nvmet_async_events_process(), which processes
the async events and updates both the array and the list. With this
we avoid having two places where the array and the list are
modified.

A status value != NVME_SC_SUCCESS implies that
nvmet_async_events_process() is called from the free-events context.
In this case, after clearing the AENs present on the
ctrl->async_events list, we also loop over ctrl->async_event_cmds[]
to complete any requests posted by the host for which no AEN is
present on the ctrl->async_events list.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Daniel Wagner <dwagner@suse.de>
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---

Hi,

I've added code to loop over and clear out the outstanding requests
present in ctrl->async_event_cmds[] for which no AEN was generated,
and updated the patch description accordingly.

I did basic testing (create/delete ctrl and enable/disable ns),
which did not produce any hangs or errors.
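
In loop form, the enable/disable part boils down to something like
this (a minimal sketch; the "fs" subsystem and nsid 1 match the log
below, adjust to your setup):

  for i in $(seq 1 50); do
      # each toggle raises a namespace-attribute-changed AEN on the target
      echo 0 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable
      echo 1 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable
      nvme list > /dev/null
  done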

Regards,
Chaitanya

Following is the test log if anyone wants to take a look:

nvmet: adding nsid 1 to subsystem fs
nvmet: creating controller 1 for subsystem fs for NQN \
nqn.2014-08.org.nvmexpress:uuid:a0b58ccb-6b6a-4045-8ee1-641816bf548d.
nvme nvme1: Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.
nvme nvme1: creating 12 I/O queues.
nvme nvme1: new ctrl: "fs"
# nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
/dev/nvme1n1 c71f2d6a3f577a29 Linux 1 524.29 MB / 524.29 MB 1 B + 9 B 5.4.0-rc
# cat /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable 
1
# nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
/dev/nvme1n1 c71f2d6a3f577a29 Linux 1 524.29 MB / 524.29 MB 1 B + 9 B 5.4.0-rc
# echo 0 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable 
# nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
# echo 1 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable 
# nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
/dev/nvme1n1 c71f2d6a3f577a29 Linux 1 524.29 MB / 524.29 MB 1 B + 9 B 5.4.0-rc
# echo 0 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable 
# nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
# echo 1 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable 
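
The create/delete ctrl part was driven from the host side along these
lines (a sketch; loop transport assumed, adjust for your fabric):

  # connect to and disconnect from the "fs" subsystem shown above
  nvme connect -t loop -n fs
  nvme disconnect -n fs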


---
 drivers/nvme/target/core.c | 49 ++++++++++++++++++++++----------------
 1 file changed, 28 insertions(+), 21 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 28438b833c1b..5f7f6fd864fb 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -129,11 +129,32 @@ static u32 nvmet_async_event_result(struct nvmet_async_event *aen)
 	return aen->event_type | (aen->event_info << 8) | (aen->log_page << 16);
 }
 
-static void nvmet_async_events_free(struct nvmet_ctrl *ctrl)
+static void nvmet_async_events_process(struct nvmet_ctrl *ctrl, u16 status)
 {
+	struct nvmet_async_event *aen;
 	struct nvmet_req *req;
 
 	while (1) {
+		mutex_lock(&ctrl->lock);
+		aen = list_first_entry_or_null(&ctrl->async_events,
+				struct nvmet_async_event, entry);
+		if (!aen || !ctrl->nr_async_event_cmds) {
+			mutex_unlock(&ctrl->lock);
+			break;
+		}
+
+		req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
+		if (status == 0)
+			nvmet_set_result(req, nvmet_async_event_result(aen));
+
+		list_del(&aen->entry);
+		kfree(aen);
+
+		mutex_unlock(&ctrl->lock);
+		nvmet_req_complete(req, status);
+	}
+
+	while (status) {
 		mutex_lock(&ctrl->lock);
 		if (!ctrl->nr_async_event_cmds) {
 			mutex_unlock(&ctrl->lock);
@@ -146,31 +167,17 @@ static void nvmet_async_events_free(struct nvmet_ctrl *ctrl)
 	}
 }
 
+static void nvmet_async_events_free(struct nvmet_ctrl *ctrl)
+{
+	nvmet_async_events_process(ctrl, NVME_SC_INTERNAL | NVME_SC_DNR);
+}
+
 static void nvmet_async_event_work(struct work_struct *work)
 {
 	struct nvmet_ctrl *ctrl =
 		container_of(work, struct nvmet_ctrl, async_event_work);
-	struct nvmet_async_event *aen;
-	struct nvmet_req *req;
-
-	while (1) {
-		mutex_lock(&ctrl->lock);
-		aen = list_first_entry_or_null(&ctrl->async_events,
-				struct nvmet_async_event, entry);
-		if (!aen || !ctrl->nr_async_event_cmds) {
-			mutex_unlock(&ctrl->lock);
-			return;
-		}
-
-		req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
-		nvmet_set_result(req, nvmet_async_event_result(aen));
 
-		list_del(&aen->entry);
-		kfree(aen);
-
-		mutex_unlock(&ctrl->lock);
-		nvmet_req_complete(req, 0);
-	}
+	nvmet_async_events_process(ctrl, 0);
 }
 
 void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
-- 
2.22.1



* Re: [PATCH] nvmet: Always remove processed AER elements from list
  2019-11-03 20:13 [PATCH] nvmet: Always remove processed AER elements from list Chaitanya Kulkarni
@ 2019-11-04  8:13 ` Daniel Wagner
  2019-11-04  9:50   ` Johannes Thumshirn
  2019-11-04 20:41   ` Chaitanya Kulkarni
  0 siblings, 2 replies; 11+ messages in thread
From: Daniel Wagner @ 2019-11-04  8:13 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: hch, sagi, linux-nvme, jthumshirn

Hi Chaitanya,

On Sun, Nov 03, 2019 at 12:13:10PM -0800, Chaitanya Kulkarni wrote:
> I've added code to loop over and clear out the outstanding requests
> present in ctrl->async_event_cmds[] for which no AEN was generated,
> and updated the patch description accordingly.
> 
> I did basic testing (create/delete ctrl and enable/disable ns),
> which did not produce any hangs or errors.

Thanks for addressing Christoph's comment. I'll get this patch into
our customer's test setup.

> Following is the test log if anyone wants to take a look:
> 
> nvmet: adding nsid 1 to subsystem fs
> nvmet: creating controller 1 for subsystem fs for NQN \
> nqn.2014-08.org.nvmexpress:uuid:a0b58ccb-6b6a-4045-8ee1-641816bf548d.
> nvme nvme1: Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.
> nvme nvme1: creating 12 I/O queues.
> nvme nvme1: new ctrl: "fs"
> # nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
> /dev/nvme1n1 c71f2d6a3f577a29 Linux 1 524.29 MB / 524.29 MB 1 B + 9 B 5.4.0-rc
> # cat /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable 
> 1
> # nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
> /dev/nvme1n1 c71f2d6a3f577a29 Linux 1 524.29 MB / 524.29 MB 1 B + 9 B 5.4.0-rc
> # echo 0 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable 
> # nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
> # echo 1 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable 
> # nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
> /dev/nvme1n1 c71f2d6a3f577a29 Linux 1 524.29 MB / 524.29 MB 1 B + 9 B 5.4.0-rc
> # echo 0 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable 
> # nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
> # echo 1 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable

BTW, I got feedback on how to reproduce the oops on our customer's setup:

Test Steps:
1. Create two targets on node n1 as the target side (tgt1 and tgt2)
2. Run perf against tgt2:
   sudo ./perf -q 1 -w read -o 4096 -t 60 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.219.4 trsvcid:4421'
3. Delete the tgt2 configuration on node n1 during the perf run;
   node n1 then reboots automatically.

I hope I got the steps summarized correctly.

Thanks,
Daniel


* Re: [PATCH] nvmet: Always remove processed AER elements from list
  2019-11-04  8:13 ` Daniel Wagner
@ 2019-11-04  9:50   ` Johannes Thumshirn
  2019-11-04 10:19     ` Daniel Wagner
  2019-11-04 20:41   ` Chaitanya Kulkarni
  1 sibling, 1 reply; 11+ messages in thread
From: Johannes Thumshirn @ 2019-11-04  9:50 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: hch, linux-nvme, Chaitanya Kulkarni, sagi

On Mon, Nov 04, 2019 at 09:13:38AM +0100, Daniel Wagner wrote:
[...]
> BTW, I got feedback on how to reproduce the oops on our customer's setup:
> 
> Test Steps:
> 1. Create two targets on node n1 as the target side (tgt1 and tgt2)
> 2. Run perf against tgt2:
>    sudo ./perf -q 1 -w read -o 4096 -t 60 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.219.4 trsvcid:4421'
> 3. Delete the tgt2 configuration on node n1 during the perf run;
>    node n1 then reboots automatically.

I assume this "perf" is some kind of performance test, so it should be
possible to create a blktests case recreating this issue.

I'd translate the "perf" line into something similar to 'fio --rw=read \
--bs=4k --time_based --runtime=60 --iodepth=1' and then run it on a
multipathed nvme-loop setup, deleting one of the controllers while the
I/O is running (spelled out below). This should be a fairly usual
multi-path test. The only thing I can't see yet is how we end up with
unprocessed AENs here.
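
Roughly like this (an untested sketch; the device name and the
configfs paths are assumptions, adjust them to the actual loop setup):

  # steady host I/O against the loop controller
  fio --name=aen-repro --filename=/dev/nvme1n1 --direct=1 \
      --rw=read --bs=4k --iodepth=1 --time_based --runtime=60 &

  # meanwhile tear the target down, so any outstanding AER commands
  # have to be completed by nvmet_async_events_free()
  echo 0 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable
  rmdir /sys/kernel/config/nvmet/subsystems/fs/namespaces/1
  wait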

Byte,
	Johannes


* Re: [PATCH] nvmet: Always remove processed AER elements from list
  2019-11-04  9:50   ` Johannes Thumshirn
@ 2019-11-04 10:19     ` Daniel Wagner
  2019-11-08 10:42       ` Daniel Wagner
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Wagner @ 2019-11-04 10:19 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: hch, linux-nvme, Chaitanya Kulkarni, sagi

On Mon, Nov 04, 2019 at 10:50:34AM +0100, Johannes Thumshirn wrote:
> On Mon, Nov 04, 2019 at 09:13:38AM +0100, Daniel Wagner wrote:
> [...]
> > BTW, I got feedback on how to reproduce the oops on our customer's setup:
> > 
> > Test Steps:
> > 1. Create two targets on node n1 as the target side (tgt1 and tgt2)
> > 2. Run perf against tgt2:
> >    sudo ./perf -q 1 -w read -o 4096 -t 60 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.219.4 trsvcid:4421'
> > 3. Delete the tgt2 configuration on node n1 during the perf run;
> >    node n1 then reboots automatically.

sudo rmdir subsystems/199212f7-33d0-46c3-b362-e47a217c2c49/namespaces/2
sudo unlink /sys/kernel/config/nvmet/ports/2/subsystems/199212f7-33d0-46c3-b362-e47a217c2c49
sudo rmdir subsystems/199212f7-33d0-46c3-b362-e47a217c2c49/
sudo rmdir ports/2/
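
(For completeness, the matching setup side is roughly the following; a
sketch assuming a ramdisk-backed namespace, with paths relative to
/sys/kernel/config/nvmet:)

sudo mkdir subsystems/199212f7-33d0-46c3-b362-e47a217c2c49
sudo sh -c 'echo 1 > subsystems/199212f7-33d0-46c3-b362-e47a217c2c49/attr_allow_any_host'
sudo mkdir subsystems/199212f7-33d0-46c3-b362-e47a217c2c49/namespaces/2
sudo sh -c 'echo /dev/ram0 > subsystems/199212f7-33d0-46c3-b362-e47a217c2c49/namespaces/2/device_path'
sudo sh -c 'echo 1 > subsystems/199212f7-33d0-46c3-b362-e47a217c2c49/namespaces/2/enable'
sudo mkdir ports/2
sudo sh -c 'echo rdma > ports/2/addr_trtype'
sudo sh -c 'echo ipv4 > ports/2/addr_adrfam'
sudo sh -c 'echo 192.168.219.4 > ports/2/addr_traddr'
sudo sh -c 'echo 4421 > ports/2/addr_trsvcid'
sudo ln -s /sys/kernel/config/nvmet/subsystems/199212f7-33d0-46c3-b362-e47a217c2c49 ports/2/subsystems/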

> I assume this "perf" is some kind of performance test, so it should be
> possible to create a blktests case recreating this issue.
> 
> I'd translate the "perf" line into something similar to 'fio --rw=read \
> --bs=4k --time_based --runtime=60 --iodepth=1' and then run it on a
> multipathed nvme-loop setup, deleting one of the controllers while the
> I/O is running. This should be a fairly usual multi-path test. The only
> thing I can't see yet is how we end up with unprocessed AENs here.

The 'perf' command seems to be able to write to a specific target. At
least that's what I read from another command in the feedback I got:

  Run perf to write data to tgt1 and tgt2 for a long time, command:
    sudo ./perf -q 1 -w read -o 4096 -t 600000 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.219.4 trsvcid:4420'
    sudo ./perf -q 1 -w read -o 4096 -t 600000 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.219.4 trsvcid:4421'


Anyway, I also have a few lines from dmesg before the crash (this is
without the patch):

[ 1507.453659] nvmet: creating controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0052-3910-8036-c7c04f315332.
[ 1507.453794] nvme nvme12: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 192.168.219.4:4421
[ 1507.453925] nvme nvme12: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
[ 1926.227086] nvmet: creating controller 1 for subsystem 416d7463-d3c2-41e4-b2f5-76df2ed3db26 for NQN nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0052-4210-8033-c7c04f315332.
[ 1927.992304] nvmet: creating controller 2 for subsystem 199212f7-33d0-46c3-b362-e47a217c2c49 for NQN nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0052-4210-8033-c7c04f315332.
[ 2135.800592] nvmet: adding nsid 2 to subsystem 199212f7-33d0-46c3-b362-e47a217c2c49
[ 2137.174479] nvmet_rdma: enabling port 2 (192.168.219.4:4421)
[ 2138.597566] nvmet: creating controller 2 for subsystem 199212f7-33d0-46c3-b362-e47a217c2c49 for NQN nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0052-4210-8033-c7c04f315332.
[ 2912.103808] nvmet: ctrl 2 keep-alive timer (15 seconds) expired!
[ 2912.103812] nvmet: ctrl 2 fatal error occurred!
[ 4005.326810] nvmet: creating controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN 2014-08.org.nvmexpress:uuid:a9ebf41b-201c-42e1-9b09-4dd7fb70e2c5.
[ 4005.329045] nvmet: creating controller 2 for subsystem 416d7463-d3c2-41e4-b2f5-76df2ed3db26 for NQN 2014-08.org.nvmexpress:uuid:a9ebf41b-201c-42e1-9b09-4dd7fb70e2c5.
[ 4068.052392] nvmet: adding nsid 2 to subsystem 199212f7-33d0-46c3-b362-e47a217c2c49
[ 4068.106984] nvmet_rdma: enabling port 2 (192.168.219.4:4421)
[ 4078.581846] nvmet: creating controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN 2014-08.org.nvmexpress:uuid:742c21ef-fcee-4b64-aec1-6063ae5710e4.
[ 4078.583849] nvmet: creating controller 2 for subsystem 199212f7-33d0-46c3-b362-e47a217c2c49 for NQN 2014-08.org.nvmexpress:uuid:742c21ef-fcee-4b64-aec1-6063ae5710e4.
[ 5076.654122] nvmet: adding nsid 1 to subsystem 416d7463-d3c2-41e4-b2f5-76df2ed3db26
[ 5076.720100] nvmet_rdma: enabling port 1 (192.168.219.4:4420)
[ 5076.744432] nvmet: adding nsid 2 to subsystem 199212f7-33d0-46c3-b362-e47a217c2c49
[ 5076.804401] nvmet_rdma: enabling port 2 (192.168.219.4:4421)
[ 5115.805550] nvmet: creating controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN 2014-08.org.nvmexpress:uuid:988626aa-4466-44dc-8142-6bfc382192e3.
[ 5115.807714] nvmet: creating controller 2 for subsystem 199212f7-33d0-46c3-b362-e47a217c2c49 for NQN 2014-08.org.nvmexpress:uuid:988626aa-4466-44dc-8142-6bfc382192e3.
[ 5136.186393] general protection fault: 0000 [#1] SMP PTI
[ 5136.191643] CPU: 14 PID: 54995 Comm: kworker/14:1 Tainted: G           OE      4.12.14-95.19.1.17807.0.PTF.1134097-default #1 SLE12-SP4
[ 5136.203873] Hardware name: Dell Inc. PowerEdge R740xd/01KPX8, BIOS 1.5.4 07/30/2018
[ 5136.211554] Workqueue: events nvmet_async_event_work [nvmet]
[ 5136.217230] task: ffff983bdb11d0c0 task.stack: ffffafc449b58000
[ 5136.223169] RIP: 0010:nvmet_async_event_work+0x5e/0xb0 [nvmet]
[ 5136.229017] RSP: 0018:ffffafc449b5be58 EFLAGS: 00010202
[ 5136.234258] RAX: dead000000000100 RBX: ffff983b99b830b0 RCX: 0000000000000002
[ 5136.241404] RDX: 0000000000040002 RSI: 38ffff983bdb3c05 RDI: ffff983bdb9412c0
[ 5136.248552] RBP: ffff983b99b83018 R08: 00000000fffffff0 R09: 0000000000000000
[ 5136.255709] R10: fffffffffffffff4 R11: ffffffffffffff83 R12: ffff983b99b830a0
[ 5136.262887] R13: ffff983bcaa55f29 R14: 0000000000000000 R15: 0ffff983c501e860
[ 5136.270045] FS:  0000000000000000(0000) GS:ffff983c501c0000(0000) knlGS:0000000000000000
[ 5136.278160] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 5136.283925] CR2: 00007f06d9f246e0 CR3: 00000032ed00a001 CR4: 00000000007606e0
[ 5136.291073] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 5136.298236] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 5136.305398] PKRU: 55555554
[ 5136.308121] Call Trace:
[ 5136.310587]  process_one_work+0x14c/0x390
[ 5136.314619]  worker_thread+0x47/0x3e0
[ 5136.318294]  kthread+0xff/0x140
[ 5136.321462]  ? max_active_store+0x60/0x60
[ 5136.325503]  ? __kthread_parkme+0x70/0x70
[ 5136.329526]  ret_from_fork+0x35/0x40
[ 5136.333131] Code: b8 00 01 00 00 00 00 ad de 89 53 e8 4c 8b 6c d3 c8 0f b6 4f 12 0f b6 57 11 49 8b 75 08 c1 e1 10 c1 e2 08 09 ca 0f b6 4f 10 09 ca <89> 16 48 8b 0f 48 8b 57 08 48 89 51 08 48 89 0a 48 89 07 66 b8


At 2912 tgt2 is probably deleted. I wonder why there are two "enabling
port 2" messages after it. FWIW, RAX == dead000000000100 in the oops
above is LIST_POISON1, so the worker appears to touch an async_events
entry after a list_del(), which fits the use-after-free theory behind
this patch.

Thanks,
Daniel


* Re: [PATCH] nvmet: Always remove processed AER elements from list
  2019-11-04  8:13 ` Daniel Wagner
  2019-11-04  9:50   ` Johannes Thumshirn
@ 2019-11-04 20:41   ` Chaitanya Kulkarni
  1 sibling, 0 replies; 11+ messages in thread
From: Chaitanya Kulkarni @ 2019-11-04 20:41 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: hch, sagi, linux-nvme, jthumshirn

On 11/04/2019 12:13 AM, Daniel Wagner wrote:
>> > I did basic testing (create/delete ctrl and enable/disable ns),
>> > which did not produce any hangs or errors.
> Thanks for addressing Christoph's comment. I'll get this patch into
> our customer's test setup.
>
>> > Following is the test log if anyone wants to take a look:
>> # echo 0 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable
>> ># nvme list | tr -s ' ' ' ' | grep Linux | grep -v '\-\-\-\-'
>> ># echo 1 > /sys/kernel/config/nvmet/subsystems/fs/namespaces/1/enable
> BTW, I got feedback on how to reproduce the oops on our customer's setup:
>
You are welcome.

> Test Steps:
> 1. Create two targets on node n1 as the target side (tgt1 and tgt2)
> 2. Run perf against tgt2:
>     sudo ./perf -q 1 -w read -o 4096 -t 60 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.219.4 trsvcid:4421'
> 3. Delete the tgt2 configuration on node n1 during the perf run;
>     node n1 then reboots automatically.
>
> I hope I got the steps summarized correctly.
>
Interesting, let me have a look at this later this week.
> Thanks,
> Daniel
>



* Re: [PATCH] nvmet: Always remove processed AER elements from list
  2019-11-04 10:19     ` Daniel Wagner
@ 2019-11-08 10:42       ` Daniel Wagner
  2019-11-10  2:25         ` Chaitanya Kulkarni
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Wagner @ 2019-11-08 10:42 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: Chaitanya Kulkarni, hch, linux-nvme, sagi

On Mon, Nov 04, 2019 at 11:19:35AM +0100, Daniel Wagner wrote:
>   Run perf to write data to tgt1 and tgt2 for a long time, command:
>     sudo ./perf -q 1 -w read -o 4096 -t 600000 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.219.4 trsvcid:4420'
>     sudo ./perf -q 1 -w read -o 4096 -t 600000 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.219.4 trsvcid:4421'

This tool is from the SPDK project:

https://github.com/spdk/spdk/tree/master/examples/nvme/perf



* Re: [PATCH] nvmet: Always remove processed AER elements from list
  2019-11-08 10:42       ` Daniel Wagner
@ 2019-11-10  2:25         ` Chaitanya Kulkarni
  2019-11-11  7:28           ` Daniel Wagner
  2019-12-05  9:23           ` Daniel Wagner
  0 siblings, 2 replies; 11+ messages in thread
From: Chaitanya Kulkarni @ 2019-11-10  2:25 UTC (permalink / raw)
  To: Daniel Wagner, Johannes Thumshirn; +Cc: hch, linux-nvme, sagi

Hi Daniel,

Did you see the same problem with the patch I've provided?

On 11/08/2019 02:42 AM, Daniel Wagner wrote:
> On Mon, Nov 04, 2019 at 11:19:35AM +0100, Daniel Wagner wrote:
>>    Run perf to write data to tgt1 and tgt2 for a long time, command:
>>      sudo ./perf -q 1 -w read -o 4096 -t 600000 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.219.4 trsvcid:4420'
>>      sudo ./perf -q 1 -w read -o 4096 -t 600000 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.219.4 trsvcid:4421'
>
> This tool is from the SPDK project:
>
> https://github.com/spdk/spdk/tree/master/examples/nvme/perf
>



* Re: [PATCH] nvmet: Always remove processed AER elements from list
  2019-11-10  2:25         ` Chaitanya Kulkarni
@ 2019-11-11  7:28           ` Daniel Wagner
  2019-12-05  9:23           ` Daniel Wagner
  1 sibling, 0 replies; 11+ messages in thread
From: Daniel Wagner @ 2019-11-11  7:28 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: sagi, hch, linux-nvme, Johannes Thumshirn

Hi Chaitanya,

> Did you see the same problem with the patch I've provided?

I haven't heard anything back from our customer so far. I am
recreating the setup to see if I can reproduce the crash. I'll let
you know how it goes.

Thanks,
Daniel


* Re: [PATCH] nvmet: Always remove processed AER elements from list
  2019-11-10  2:25         ` Chaitanya Kulkarni
  2019-11-11  7:28           ` Daniel Wagner
@ 2019-12-05  9:23           ` Daniel Wagner
  2019-12-11  7:39             ` Chaitanya Kulkarni
  1 sibling, 1 reply; 11+ messages in thread
From: Daniel Wagner @ 2019-12-05  9:23 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: sagi, hch, linux-nvme, Johannes Thumshirn

Hi Chaitanya,

On Sun, Nov 10, 2019 at 02:25:21AM +0000, Chaitanya Kulkarni wrote:
> Did you see the same problem with the patch I've provided?

I don't have the hardware to test on, and I was unable to reproduce it
with TCP as the transport on virtual machines. Unfortunately, the
customer decided to 'solve' this problem by removing the AER handling
code from their SPDK stack and gave no feedback on whether this patch
solves the problem or not.

Since we can't really verify it, I am not sure if you should pick up
the patch.

Thanks,
Daniel


* Re: [PATCH] nvmet: Always remove processed AER elements from list
  2019-12-05  9:23           ` Daniel Wagner
@ 2019-12-11  7:39             ` Chaitanya Kulkarni
  2019-12-12  9:35               ` hch
  0 siblings, 1 reply; 11+ messages in thread
From: Chaitanya Kulkarni @ 2019-12-11  7:39 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: sagi, hch, linux-nvme, Johannes Thumshirn

Christoph/Sagi,

Should we get this patch in?

On 12/05/2019 01:24 AM, Daniel Wagner wrote:
> Hi Chaitanya,
>
> On Sun, Nov 10, 2019 at 02:25:21AM +0000, Chaitanya Kulkarni wrote:
>> Did you see the same problem with the patch I've provided?
>
> I don't have the hardware to test on, and I was unable to reproduce it
> with TCP as the transport on virtual machines. Unfortunately, the
> customer decided to 'solve' this problem by removing the AER handling
> code from their SPDK stack and gave no feedback on whether this patch
> solves the problem or not.
>
> Since we can't really verify it, I am not sure if you should pick up
> the patch.
>
> Thanks,
> Daniel
>



* Re: [PATCH] nvmet: Always remove processed AER elements from list
  2019-12-11  7:39             ` Chaitanya Kulkarni
@ 2019-12-12  9:35               ` hch
  0 siblings, 0 replies; 11+ messages in thread
From: hch @ 2019-12-12  9:35 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: sagi, linux-nvme, hch, Daniel Wagner, Johannes Thumshirn

Please resend the current version.

