* Re: [SPDK] Large amount of read requests after connection
@ 2017-11-28  0:46 Liu, Changpeng
  0 siblings, 0 replies; 9+ messages in thread
From: Liu, Changpeng @ 2017-11-28  0:46 UTC (permalink / raw)
  To: spdk


Hi Meng,

Looking at SPDK’s AIO code, the AIO backend does not support the UNMAP command, so the NVMe-oF target will not set the DSM bit in the Identify Controller data,
and the initiator should not send DSM commands.
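For background, DSM support is advertised to the host through the ONCS field of the Identify Controller data. A host-side check looks roughly like the sketch below, shown with SPDK's host API purely for illustration; the kernel initiator used in this report does the equivalent internally.

```c
/* Illustrative sketch only: how a host can gate DSM on the ONCS field of the
 * Identify Controller data (SPDK host API used here just for illustration). */
#include <stdbool.h>
#include "spdk/nvme.h"

static bool
host_may_send_dsm(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	/* ONCS.DSM == 0 means the controller does not support Dataset Management,
	 * so the host must not issue DSM (e.g. Deallocate/TRIM) commands. */
	return cdata->oncs.dsm != 0;
}
```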

There may be something wrong with your system. Can you disconnect from the target first and then reconnect to the AIO backend to see whether any exception occurs?


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Meng Wang
Sent: Tuesday, November 28, 2017 4:14 AM
To: spdk(a)lists.01.org
Subject: [SPDK] Large amount of read requests after connection

Hello,
Recently, I profiled nvmf with an AIO device and found that a large number of read requests are issued after connecting to the target. Here are the commands:

```
In server:
./app/nvmf_tgt/nvmf_tgt  -t nvmf -c run/aio.conf

In client:
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s 4420
```
After the client connects, the server prints the NVMe traces. Here is an excerpt:

```
......
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 0 cdw10 0x00000001
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d79000 key 0xa8044f4d len 0x1000
ctrlr.c: 870:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ctrlr data: maxcmd 0x400
ctrlr.c: 871:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: sgls data: 0x100005
ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*: Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support io_type 3
ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*: Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support io_type 8
ctrlr.c: 908:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: ioccsz 0x104
ctrlr.c: 910:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: iorcsz 0x1
ctrlr.c: 912:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: icdoff 0x0
ctrlr.c: 914:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: ctrattr 0x0
ctrlr.c: 916:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: msdbd 0x1
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x0c fuse 0 cid 31 nsid 0 cdw10 0x00000000
ctrlr.c: 749:spdk_nvmf_ctrlr_async_event_request: *INFO*: Async Event Request
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 0 cdw10 0x00000002
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7d000 key 0xa8044f4e len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7f000 key 0xa8044f4f len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7f000 key 0xa8044f4f len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1258192000 key 0xa8056e4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000008
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1243239000 key 0xa8056e4d len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000018
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257f71000 key 0xa8056e4e len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x84da42000 key 0xa804ee4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000

......
 <millions of lines>
......
```
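A note on reading these lines: for a Read command (opcode 0x02), cdw10 carries the low 32 bits of the starting LBA, and the SGL length 0x1000 means each transfer is 4 KiB. The last read above, cdw10 0x6fc81a00, is LBA 1,875,384,832, i.e. roughly 894 GiB into the namespace if it uses 512-byte blocks (an assumption). The volume of reads in a captured copy of the target output can be summarized with something like:

```
# Count I/O read commands (opcode 0x02) in a saved copy of the target output.
# The log file name here is hypothetical.
grep -c 'I/O cmd: opc 0x02' nvmf_tgt.log
```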

In this trace, lines such as
```
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x84da42000 key 0xa804ee4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
```
repeat millions of times. Since I/O opcode 0x02 is a read, this means millions of read requests are executed after the client connects. As a comparison, we profiled nvmf with an NVMe device on the same host and SSD, and saw only ~50 read requests after the client connected. So what are these read requests for? And why are there so many reads with AIO devices?

Thanks,
--
Meng


* Re: [SPDK] Large amount of read requests after connection
@ 2017-11-28  7:02 Meng Wang
  0 siblings, 0 replies; 9+ messages in thread
From: Meng Wang @ 2017-11-28  7:02 UTC (permalink / raw)
  To: spdk


Changpeng,
Thank you so much for this patch.

On Mon, Nov 27, 2017 at 10:58 PM, Liu, Changpeng <changpeng.liu(a)intel.com>
wrote:

> Hi Meng,
>
>
>
> Here is the first version patch to fix the issue:
>
> https://review.gerrithub.io/#/c/389245/
>
>
>
> I may change it for comments, definitely we may fix it in few days.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Meng Wang
> *Sent:* Tuesday, November 28, 2017 2:43 PM
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Large amount of read requests after connection
>
>
>
> Changpeng,
>
>
>
> Good to know you pinpoint the problem.
>
>
>
> Is there any quick workaround to disable these reads when connection? Or
> could you point out which function or which file is most related to this
> problem? So that I can dive into this problem and think up some workarounds.
>
>
>
> On Mon, Nov 27, 2017 at 10:09 PM, Liu, Changpeng <changpeng.liu(a)intel.com>
> wrote:
>
> Thanks for reporting Meng.
>
>
>
> There is a bug in SPDK for AIO backend, nvmf get data buffer from mempool,
>
>
>
>
> the memory is not aligned with block size,  which cannot meet AIO
> library’s requirement, so there is an error for this case.
>
>
>
> I’ll submit a patch to fix this.
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Meng Wang
> *Sent:* Tuesday, November 28, 2017 10:18 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Large amount of read requests after connection
>
>
>
> Hello Changpeng,
>
>
>
> Thanks for your reply. I connect target with
>
> ```
>
> nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s
> 4420
>
> ```
>
> And then disconnect the target with
>
> ```
>
> nvme disconnect -n "nqn.2016-06.io.spdk:cnode1"
>
> ```
>
>
>
> I tried 2 times without stopping the nvmf_tgt process in target. For the
> first time, the target host prints many lines of
>
> ```
>
> ctrlr_bdev.c: 165:nvmf_bdev_ctrlr_read_cmd: *ERROR*: end of media
>
> ```
>
> And for the second time, the initiator hanged a long time and finally
> return.
>
>
>
>
>
> During the connection, we run benchmark in initiator side and the
> benchmark can finish successfully. But the large amount of read impact
> performance a lot.
>
> We don't know where are they from. Is there any way to disable these read?
>
>
>
>
>
> Thanks,
>
> Meng
>
>
>
> On Mon, Nov 27, 2017 at 4:46 PM, Liu, Changpeng <changpeng.liu(a)intel.com>
> wrote:
>
> Hi Meng,
>
>
>
> From SPDK’s AIO code, the AIO backend cannot support UNMAP command, and
> NVMoF target will not fill the identify DSM bit,
>
> so the initiator should not send the DSM command.
>
>
>
> There may be something wrong with your system, can you disconnect the
> target first and reconnect to AIO backend to see any exception ?
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Meng Wang
> *Sent:* Tuesday, November 28, 2017 4:14 AM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] Large amount of read requests after connection
>
>
>
> Hello,
>
> Recently, I profiled nvmf with AIO device, and found there are large
> amount of read requests after connection with target. Here is the command:
>
>
>
> ```
>
> In server:
>
> ./app/nvmf_tgt/nvmf_tgt  -t nvmf -c run/aio.conf
>
>
>
> In client:
>
> nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s
> 4420
>
> ```
>
> After connection from client, the server prints out the nvme traces. Here
> is an excerpt:
>
>
>
> ```
>
> ......
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 0 cdw10 0x00000001
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d79000 key 0xa8044f4d len 0x1000
>
> ctrlr.c: 870:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ctrlr data: maxcmd
> 0x400
>
> ctrlr.c: 871:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: sgls data: 0x100005
>
> ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
> Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
> io_type 3
>
> ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
> Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
> io_type 8
>
> ctrlr.c: 908:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> ioccsz 0x104
>
> ctrlr.c: 910:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> iorcsz 0x1
>
> ctrlr.c: 912:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> icdoff 0x0
>
> ctrlr.c: 914:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> ctrattr 0x0
>
> ctrlr.c: 916:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: msdbd
> 0x1
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x0c fuse 0 cid
> 31 nsid 0 cdw10 0x00000000
>
> ctrlr.c: 749:spdk_nvmf_ctrlr_async_event_request: *INFO*: Async Event
> Request
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 0 cdw10 0x00000002
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7d000 key 0xa8044f4e len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7f000 key 0xa8044f4f len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7f000 key 0xa8044f4f len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1258192000 key 0xa8056e4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000008
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1243239000 key 0xa8056e4d len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000018
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257f71000 key 0xa8056e4e len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x6fc81a00
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x84da42000 key 0xa804ee4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
>
>
> ......
>
>  <millions of lines>
>
> ......
>
> ```
>
>
>
> In  this trace, the line such as
>
> ```
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x6fc81a00
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x84da42000 key 0xa804ee4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> ```
>
> repeats millions of times. As the I/O opc 0x02 is read, it means millions
> of read requests executed after client connection. As a comparison, we
> profile nvmf with NVMe device on same host and SSD. And merely ~50 read
> requests after client connection. So what are these read requests for? And
> why there are so many reads in aio devices?
>
>
>
> Thanks,
>
> --
>
> Meng
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
> --
>
> Meng Wang
>
> Electrical and Computer Engineering
>
> University of Florida
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
> --
>
> Meng Wang
>
> Electrical and Computer Engineering
>
> University of Florida
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>


-- 
Meng Wang
Electrical and Computer Engineering
University of Florida


* Re: [SPDK] Large amount of read requests after connection
@ 2017-11-28  6:58 Liu, Changpeng
  0 siblings, 0 replies; 9+ messages in thread
From: Liu, Changpeng @ 2017-11-28  6:58 UTC (permalink / raw)
  To: spdk


Hi Meng,

Here is the first version patch to fix the issue:
https://review.gerrithub.io/#/c/389245/

I may still change it based on review comments, but we should have it fixed within a few days.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Meng Wang
Sent: Tuesday, November 28, 2017 2:43 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Large amount of read requests after connection

Changpeng,

Good to know you pinpoint the problem.

Is there any quick workaround to disable these reads when connection? Or could you point out which function or which file is most related to this problem? So that I can dive into this problem and think up some workarounds.

On Mon, Nov 27, 2017 at 10:09 PM, Liu, Changpeng <changpeng.liu(a)intel.com<mailto:changpeng.liu(a)intel.com>> wrote:
Thanks for reporting Meng.

There is a bug in SPDK for AIO backend, nvmf get data buffer from mempool,

the memory is not aligned with block size,  which cannot meet AIO library’s requirement, so there is an error for this case.

I’ll submit a patch to fix this.


From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Meng Wang
Sent: Tuesday, November 28, 2017 10:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Large amount of read requests after connection

Hello Changpeng,

Thanks for your reply. I connect target with
```
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s 4420
```
And then disconnect the target with
```
nvme disconnect -n "nqn.2016-06.io.spdk:cnode1"
```

I tried 2 times without stopping the nvmf_tgt process in target. For the first time, the target host prints many lines of
```
ctrlr_bdev.c: 165:nvmf_bdev_ctrlr_read_cmd: *ERROR*: end of media
```
And for the second time, the initiator hanged a long time and finally return.


During the connection, we run benchmark in initiator side and the benchmark can finish successfully. But the large amount of read impact performance a lot.
We don't know where are they from. Is there any way to disable these read?


Thanks,
Meng

On Mon, Nov 27, 2017 at 4:46 PM, Liu, Changpeng <changpeng.liu(a)intel.com<mailto:changpeng.liu(a)intel.com>> wrote:
Hi Meng,

From SPDK’s AIO code, the AIO backend cannot support UNMAP command, and NVMoF target will not fill the identify DSM bit,
so the initiator should not send the DSM command.

There may be something wrong with your system, can you disconnect the target first and reconnect to AIO backend to see any exception ?


From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Meng Wang
Sent: Tuesday, November 28, 2017 4:14 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] Large amount of read requests after connection

Hello,
Recently, I profiled nvmf with AIO device, and found there are large amount of read requests after connection with target. Here is the command:

```
In server:
./app/nvmf_tgt/nvmf_tgt  -t nvmf -c run/aio.conf

In client:
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s 4420
```
After connection from client, the server prints out the nvme traces. Here is an excerpt:

```
......
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 0 cdw10 0x00000001
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d79000 key 0xa8044f4d len 0x1000
ctrlr.c: 870:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ctrlr data: maxcmd 0x400
ctrlr.c: 871:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: sgls data: 0x100005
ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*: Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support io_type 3
ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*: Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support io_type 8
ctrlr.c: 908:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: ioccsz 0x104
ctrlr.c: 910:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: iorcsz 0x1
ctrlr.c: 912:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: icdoff 0x0
ctrlr.c: 914:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: ctrattr 0x0
ctrlr.c: 916:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: msdbd 0x1
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x0c fuse 0 cid 31 nsid 0 cdw10 0x00000000
ctrlr.c: 749:spdk_nvmf_ctrlr_async_event_request: *INFO*: Async Event Request
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 0 cdw10 0x00000002
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7d000 key 0xa8044f4e len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7f000 key 0xa8044f4f len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7f000 key 0xa8044f4f len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1258192000 key 0xa8056e4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000008
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1243239000 key 0xa8056e4d len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000018
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257f71000 key 0xa8056e4e len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x84da42000 key 0xa804ee4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000

......
 <millions of lines>
......
```

In  this trace, the line such as
```
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x84da42000 key 0xa804ee4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
```
repeats millions of times. As the I/O opc 0x02 is read, it means millions of read requests executed after client connection. As a comparison, we profile nvmf with NVMe device on same host and SSD. And merely ~50 read requests after client connection. So what are these read requests for? And why there are so many reads in aio devices?

Thanks,
--
Meng

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk



--
Meng Wang
Electrical and Computer Engineering
University of Florida

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk



--
Meng Wang
Electrical and Computer Engineering
University of Florida


* Re: [SPDK] Large amount of read requests after connection
@ 2017-11-28  6:43 Meng Wang
  0 siblings, 0 replies; 9+ messages in thread
From: Meng Wang @ 2017-11-28  6:43 UTC (permalink / raw)
  To: spdk


Changpeng,

Good to know you pinpointed the problem.

Is there any quick workaround to disable these reads at connection time? Or could you
point out which function or file is most closely related to this problem, so that I
can dive into it and come up with a workaround?

On Mon, Nov 27, 2017 at 10:09 PM, Liu, Changpeng <changpeng.liu(a)intel.com>
wrote:

> Thanks for reporting Meng.
>
>
>
> There is a bug in SPDK for AIO backend, nvmf get data buffer from mempool,
>
>
>
>
> the memory is not aligned with block size,  which cannot meet AIO
> library’s requirement, so there is an error for this case.
>
>
>
> I’ll submit a patch to fix this.
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Meng Wang
> *Sent:* Tuesday, November 28, 2017 10:18 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Large amount of read requests after connection
>
>
>
> Hello Changpeng,
>
>
>
> Thanks for your reply. I connect target with
>
> ```
>
> nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s
> 4420
>
> ```
>
> And then disconnect the target with
>
> ```
>
> nvme disconnect -n "nqn.2016-06.io.spdk:cnode1"
>
> ```
>
>
>
> I tried 2 times without stopping the nvmf_tgt process in target. For the
> first time, the target host prints many lines of
>
> ```
>
> ctrlr_bdev.c: 165:nvmf_bdev_ctrlr_read_cmd: *ERROR*: end of media
>
> ```
>
> And for the second time, the initiator hanged a long time and finally
> return.
>
>
>
>
>
> During the connection, we run benchmark in initiator side and the
> benchmark can finish successfully. But the large amount of read impact
> performance a lot.
>
> We don't know where are they from. Is there any way to disable these read?
>
>
>
>
>
> Thanks,
>
> Meng
>
>
>
> On Mon, Nov 27, 2017 at 4:46 PM, Liu, Changpeng <changpeng.liu(a)intel.com>
> wrote:
>
> Hi Meng,
>
>
>
> From SPDK’s AIO code, the AIO backend cannot support UNMAP command, and
> NVMoF target will not fill the identify DSM bit,
>
> so the initiator should not send the DSM command.
>
>
>
> There may be something wrong with your system, can you disconnect the
> target first and reconnect to AIO backend to see any exception ?
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Meng Wang
> *Sent:* Tuesday, November 28, 2017 4:14 AM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] Large amount of read requests after connection
>
>
>
> Hello,
>
> Recently, I profiled nvmf with AIO device, and found there are large
> amount of read requests after connection with target. Here is the command:
>
>
>
> ```
>
> In server:
>
> ./app/nvmf_tgt/nvmf_tgt  -t nvmf -c run/aio.conf
>
>
>
> In client:
>
> nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s
> 4420
>
> ```
>
> After connection from client, the server prints out the nvme traces. Here
> is an excerpt:
>
>
>
> ```
>
> ......
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 0 cdw10 0x00000001
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d79000 key 0xa8044f4d len 0x1000
>
> ctrlr.c: 870:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ctrlr data: maxcmd
> 0x400
>
> ctrlr.c: 871:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: sgls data: 0x100005
>
> ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
> Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
> io_type 3
>
> ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
> Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
> io_type 8
>
> ctrlr.c: 908:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> ioccsz 0x104
>
> ctrlr.c: 910:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> iorcsz 0x1
>
> ctrlr.c: 912:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> icdoff 0x0
>
> ctrlr.c: 914:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> ctrattr 0x0
>
> ctrlr.c: 916:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: msdbd
> 0x1
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x0c fuse 0 cid
> 31 nsid 0 cdw10 0x00000000
>
> ctrlr.c: 749:spdk_nvmf_ctrlr_async_event_request: *INFO*: Async Event
> Request
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 0 cdw10 0x00000002
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7d000 key 0xa8044f4e len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7f000 key 0xa8044f4f len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7f000 key 0xa8044f4f len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1258192000 key 0xa8056e4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000008
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1243239000 key 0xa8056e4d len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000018
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257f71000 key 0xa8056e4e len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x6fc81a00
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x84da42000 key 0xa804ee4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
>
>
> ......
>
>  <millions of lines>
>
> ......
>
> ```
>
>
>
> In  this trace, the line such as
>
> ```
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x6fc81a00
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x84da42000 key 0xa804ee4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> ```
>
> repeats millions of times. As the I/O opc 0x02 is read, it means millions
> of read requests executed after client connection. As a comparison, we
> profile nvmf with NVMe device on same host and SSD. And merely ~50 read
> requests after client connection. So what are these read requests for? And
> why there are so many reads in aio devices?
>
>
>
> Thanks,
>
> --
>
> Meng
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
> --
>
> Meng Wang
>
> Electrical and Computer Engineering
>
> University of Florida
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>


-- 
Meng Wang
Electrical and Computer Engineering
University of Florida


* Re: [SPDK] Large amount of read requests after connection
@ 2017-11-28  6:09 Liu, Changpeng
  0 siblings, 0 replies; 9+ messages in thread
From: Liu, Changpeng @ 2017-11-28  6:09 UTC (permalink / raw)
  To: spdk


Thanks for reporting this, Meng.

There is a bug in SPDK’s AIO backend: nvmf gets its data buffers from a mempool, and that memory is not aligned to the block size, which does not meet the AIO library’s requirement, so this case fails.

I’ll submit a patch to fix this.
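To illustrate the requirement being referred to: when a file or block device is opened with O_DIRECT, Linux AIO needs the buffer address, transfer length, and file offset all aligned to the logical block size, otherwise the I/O fails. A minimal standalone sketch (assumptions: libaio and a 512-byte block size; this is not SPDK code and not the actual patch):

```c
/* Minimal illustration of the Linux AIO + O_DIRECT alignment requirement
 * (standalone sketch; not SPDK code and not the actual patch). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 512

int
read_first_block(const char *path)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	void *buf = NULL;
	int fd, rc = -1;

	fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		return -1;
	}

	/* A buffer from plain malloc() may be unaligned and make the O_DIRECT
	 * read fail; posix_memalign() guarantees block-size alignment. */
	if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE) != 0) {
		close(fd);
		return -1;
	}

	if (io_setup(1, &ctx) == 0) {
		/* offset 0 and length BLOCK_SIZE are both block-aligned */
		io_prep_pread(&cb, fd, buf, BLOCK_SIZE, 0);
		if (io_submit(ctx, 1, cbs) == 1) {
			rc = io_getevents(ctx, 1, 1, &ev, NULL) == 1 ? 0 : -1;
		}
		io_destroy(ctx);
	}

	free(buf);
	close(fd);
	return rc;
}
```

Compile with -laio; with an unaligned buffer the same O_DIRECT read typically fails with EINVAL.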


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Meng Wang
Sent: Tuesday, November 28, 2017 10:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Large amount of read requests after connection

Hello Changpeng,

Thanks for your reply. I connect target with
```
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s 4420
```
And then disconnect the target with
```
nvme disconnect -n "nqn.2016-06.io.spdk:cnode1"
```

I tried 2 times without stopping the nvmf_tgt process in target. For the first time, the target host prints many lines of
```
ctrlr_bdev.c: 165:nvmf_bdev_ctrlr_read_cmd: *ERROR*: end of media
```
And for the second time, the initiator hanged a long time and finally return.


During the connection, we run benchmark in initiator side and the benchmark can finish successfully. But the large amount of read impact performance a lot.
We don't know where are they from. Is there any way to disable these read?


Thanks,
Meng

On Mon, Nov 27, 2017 at 4:46 PM, Liu, Changpeng <changpeng.liu(a)intel.com<mailto:changpeng.liu(a)intel.com>> wrote:
Hi Meng,

From SPDK’s AIO code, the AIO backend cannot support UNMAP command, and NVMoF target will not fill the identify DSM bit,
so the initiator should not send the DSM command.

There may be something wrong with your system, can you disconnect the target first and reconnect to AIO backend to see any exception ?


From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Meng Wang
Sent: Tuesday, November 28, 2017 4:14 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] Large amount of read requests after connection

Hello,
Recently, I profiled nvmf with AIO device, and found there are large amount of read requests after connection with target. Here is the command:

```
In server:
./app/nvmf_tgt/nvmf_tgt  -t nvmf -c run/aio.conf

In client:
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s 4420
```
After connection from client, the server prints out the nvme traces. Here is an excerpt:

```
......
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 0 cdw10 0x00000001
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d79000 key 0xa8044f4d len 0x1000
ctrlr.c: 870:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ctrlr data: maxcmd 0x400
ctrlr.c: 871:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: sgls data: 0x100005
ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*: Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support io_type 3
ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*: Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support io_type 8
ctrlr.c: 908:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: ioccsz 0x104
ctrlr.c: 910:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: iorcsz 0x1
ctrlr.c: 912:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: icdoff 0x0
ctrlr.c: 914:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: ctrattr 0x0
ctrlr.c: 916:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: msdbd 0x1
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x0c fuse 0 cid 31 nsid 0 cdw10 0x00000000
ctrlr.c: 749:spdk_nvmf_ctrlr_async_event_request: *INFO*: Async Event Request
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 0 cdw10 0x00000002
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7d000 key 0xa8044f4e len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7f000 key 0xa8044f4f len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7f000 key 0xa8044f4f len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1258192000 key 0xa8056e4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000008
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1243239000 key 0xa8056e4d len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000018
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257f71000 key 0xa8056e4e len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x84da42000 key 0xa804ee4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000

......
 <millions of lines>
......
```

In  this trace, the line such as
```
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x84da42000 key 0xa804ee4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
```
repeats millions of times. As the I/O opc 0x02 is read, it means millions of read requests executed after client connection. As a comparison, we profile nvmf with NVMe device on same host and SSD. And merely ~50 read requests after client connection. So what are these read requests for? And why there are so many reads in aio devices?

Thanks,
--
Meng

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk



--
Meng Wang
Electrical and Computer Engineering
University of Florida


* Re: [SPDK] Large amount of read requests after connection
@ 2017-11-28  3:07 Liu, Changpeng
  0 siblings, 0 replies; 9+ messages in thread
From: Liu, Changpeng @ 2017-11-28  3:07 UTC (permalink / raw)
  To: spdk


That should be okay. The log you attached says DSM is not supported, which is a normal message.

I will test the AIO backend and update you with my test results later.


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Meng Wang
Sent: Tuesday, November 28, 2017 11:02 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Large amount of read requests after connection

Hello Changpeng,

Also, I can be sure DSM command is not issued by initiator, since I use gdb to execute "nvme connect" and set breakpoints on functions nvme_dsm and dsm. Neither is triggered.

I don't know how to analyze the printed traces. Can I ask how did you infer that it is related to DSM command ?

Thanks,

On Mon, Nov 27, 2017 at 4:46 PM, Liu, Changpeng <changpeng.liu(a)intel.com<mailto:changpeng.liu(a)intel.com>> wrote:
Hi Meng,

From SPDK’s AIO code, the AIO backend cannot support UNMAP command, and NVMoF target will not fill the identify DSM bit,
so the initiator should not send the DSM command.

There may be something wrong with your system, can you disconnect the target first and reconnect to AIO backend to see any exception ?


From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Meng Wang
Sent: Tuesday, November 28, 2017 4:14 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] Large amount of read requests after connection

Hello,
Recently, I profiled nvmf with AIO device, and found there are large amount of read requests after connection with target. Here is the command:

```
In server:
./app/nvmf_tgt/nvmf_tgt  -t nvmf -c run/aio.conf

In client:
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s 4420
```
After connection from client, the server prints out the nvme traces. Here is an excerpt:

```
......
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 0 cdw10 0x00000001
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d79000 key 0xa8044f4d len 0x1000
ctrlr.c: 870:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ctrlr data: maxcmd 0x400
ctrlr.c: 871:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: sgls data: 0x100005
ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*: Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support io_type 3
ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*: Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support io_type 8
ctrlr.c: 908:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: ioccsz 0x104
ctrlr.c: 910:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: iorcsz 0x1
ctrlr.c: 912:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: icdoff 0x0
ctrlr.c: 914:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: ctrattr 0x0
ctrlr.c: 916:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: msdbd 0x1
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x0c fuse 0 cid 31 nsid 0 cdw10 0x00000000
ctrlr.c: 749:spdk_nvmf_ctrlr_async_event_request: *INFO*: Async Event Request
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 0 cdw10 0x00000002
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7d000 key 0xa8044f4e len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7f000 key 0xa8044f4f len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257d7f000 key 0xa8044f4f len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1258192000 key 0xa8056e4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000008
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1243239000 key 0xa8056e4d len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x00000018
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x1257f71000 key 0xa8056e4e len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x84da42000 key 0xa804ee4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000

......
 <millions of lines>
......
```

In  this trace, the line such as
```
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1 nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr 0x84da42000 key 0xa804ee4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1 cdw0=0x00000000 rsvd1=0 status=0x0000
```
repeats millions of times. As the I/O opc 0x02 is read, it means millions of read requests executed after client connection. As a comparison, we profile nvmf with NVMe device on same host and SSD. And merely ~50 read requests after client connection. So what are these read requests for? And why there are so many reads in aio devices?

Thanks,
--
Meng

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk



--
Meng Wang
Electrical and Computer Engineering
University of Florida


* Re: [SPDK] Large amount of read requests after connection
@ 2017-11-28  3:02 Meng Wang
  0 siblings, 0 replies; 9+ messages in thread
From: Meng Wang @ 2017-11-28  3:02 UTC (permalink / raw)
  To: spdk


Hello Changpeng,

Also, I can confirm that the DSM command is not issued by the initiator: I ran
"nvme connect" under gdb and set breakpoints on the functions nvme_dsm and dsm,
and neither was hit.
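For reference, the check was done roughly like this (assuming nvme-cli is built with debug symbols, and using the function names mentioned above):

```
gdb --args nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s 4420
(gdb) break nvme_dsm
(gdb) break dsm
(gdb) run
```

Neither breakpoint fires, so the initiator never builds a DSM command during connect.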

I don't know how to analyze the printed traces. Can I ask how you inferred that it
is related to the DSM command?

Thanks,

On Mon, Nov 27, 2017 at 4:46 PM, Liu, Changpeng <changpeng.liu(a)intel.com>
wrote:

> Hi Meng,
>
>
>
> From SPDK’s AIO code, the AIO backend cannot support UNMAP command, and
> NVMoF target will not fill the identify DSM bit,
>
> so the initiator should not send the DSM command.
>
>
>
> There may be something wrong with your system, can you disconnect the
> target first and reconnect to AIO backend to see any exception ?
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Meng Wang
> *Sent:* Tuesday, November 28, 2017 4:14 AM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] Large amount of read requests after connection
>
>
>
> Hello,
>
> Recently, I profiled nvmf with AIO device, and found there are large
> amount of read requests after connection with target. Here is the command:
>
>
>
> ```
>
> In server:
>
> ./app/nvmf_tgt/nvmf_tgt  -t nvmf -c run/aio.conf
>
>
>
> In client:
>
> nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s
> 4420
>
> ```
>
> After connection from client, the server prints out the nvme traces. Here
> is an excerpt:
>
>
>
> ```
>
> ......
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 0 cdw10 0x00000001
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d79000 key 0xa8044f4d len 0x1000
>
> ctrlr.c: 870:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ctrlr data: maxcmd
> 0x400
>
> ctrlr.c: 871:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: sgls data: 0x100005
>
> ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
> Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
> io_type 3
>
> ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
> Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
> io_type 8
>
> ctrlr.c: 908:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> ioccsz 0x104
>
> ctrlr.c: 910:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> iorcsz 0x1
>
> ctrlr.c: 912:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> icdoff 0x0
>
> ctrlr.c: 914:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> ctrattr 0x0
>
> ctrlr.c: 916:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: msdbd
> 0x1
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x0c fuse 0 cid
> 31 nsid 0 cdw10 0x00000000
>
> ctrlr.c: 749:spdk_nvmf_ctrlr_async_event_request: *INFO*: Async Event
> Request
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 0 cdw10 0x00000002
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7d000 key 0xa8044f4e len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7f000 key 0xa8044f4f len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7f000 key 0xa8044f4f len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1258192000 key 0xa8056e4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000008
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1243239000 key 0xa8056e4d len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000018
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257f71000 key 0xa8056e4e len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x6fc81a00
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x84da42000 key 0xa804ee4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
>
>
> ......
>
>  <millions of lines>
>
> ......
>
> ```
>
>
>
> In  this trace, the line such as
>
> ```
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x6fc81a00
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x84da42000 key 0xa804ee4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> ```
>
> repeats millions of times. As the I/O opc 0x02 is read, it means millions
> of read requests executed after client connection. As a comparison, we
> profile nvmf with NVMe device on same host and SSD. And merely ~50 read
> requests after client connection. So what are these read requests for? And
> why there are so many reads in aio devices?
>
>
>
> Thanks,
>
> --
>
> Meng
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>


-- 
Meng Wang
Electrical and Computer Engineering
University of Florida


* Re: [SPDK] Large amount of read requests after connection
@ 2017-11-28  2:18 Meng Wang
  0 siblings, 0 replies; 9+ messages in thread
From: Meng Wang @ 2017-11-28  2:18 UTC (permalink / raw)
  To: spdk


Hello Changpeng,

Thanks for your reply. I connect to the target with
```
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s 4420
```
And then disconnect from the target with
```
nvme disconnect -n "nqn.2016-06.io.spdk:cnode1"
```

I tried this twice without stopping the nvmf_tgt process on the target. The first
time, the target host printed many lines of
```
ctrlr_bdev.c: 165:nvmf_bdev_ctrlr_read_cmd: *ERROR*: end of media
```
The second time, the initiator hung for a long time before finally returning.
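For context on the "end of media" message above: the target reports it when a read's LBA range falls outside the namespace. Conceptually the check looks like the sketch below (a simplified illustration, not copied from SPDK's ctrlr_bdev.c):

```c
/* Conceptual sketch of the kind of bounds check behind an "end of media"
 * style error (simplified; not the literal SPDK source). */
#include <stdbool.h>
#include <stdint.h>

static bool
lba_range_in_bounds(uint64_t start_lba, uint64_t num_blocks, uint64_t ns_blocks)
{
	/* reject ranges that wrap around or run past the last block */
	if (start_lba + num_blocks < start_lba) {
		return false;
	}
	return start_lba + num_blocks <= ns_blocks;
}
```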


While connected, we run a benchmark on the initiator side and it finishes
successfully, but the large number of reads hurts performance a lot.
We don't know where they come from. Is there any way to disable these reads?


Thanks,
Meng

On Mon, Nov 27, 2017 at 4:46 PM, Liu, Changpeng <changpeng.liu(a)intel.com>
wrote:

> Hi Meng,
>
>
>
> From SPDK’s AIO code, the AIO backend cannot support UNMAP command, and
> NVMoF target will not fill the identify DSM bit,
>
> so the initiator should not send the DSM command.
>
>
>
> There may be something wrong with your system, can you disconnect the
> target first and reconnect to AIO backend to see any exception ?
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Meng Wang
> *Sent:* Tuesday, November 28, 2017 4:14 AM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] Large amount of read requests after connection
>
>
>
> Hello,
>
> Recently, I profiled nvmf with AIO device, and found there are large
> amount of read requests after connection with target. Here is the command:
>
>
>
> ```
>
> In server:
>
> ./app/nvmf_tgt/nvmf_tgt  -t nvmf -c run/aio.conf
>
>
>
> In client:
>
> nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s
> 4420
>
> ```
>
> After connection from client, the server prints out the nvme traces. Here
> is an excerpt:
>
>
>
> ```
>
> ......
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 0 cdw10 0x00000001
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d79000 key 0xa8044f4d len 0x1000
>
> ctrlr.c: 870:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ctrlr data: maxcmd
> 0x400
>
> ctrlr.c: 871:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: sgls data: 0x100005
>
> ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
> Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
> io_type 3
>
> ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
> Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
> io_type 8
>
> ctrlr.c: 908:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> ioccsz 0x104
>
> ctrlr.c: 910:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> iorcsz 0x1
>
> ctrlr.c: 912:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> icdoff 0x0
>
> ctrlr.c: 914:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
> ctrattr 0x0
>
> ctrlr.c: 916:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: msdbd
> 0x1
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x0c fuse 0 cid
> 31 nsid 0 cdw10 0x00000000
>
> ctrlr.c: 749:spdk_nvmf_ctrlr_async_event_request: *INFO*: Async Event
> Request
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 0 cdw10 0x00000002
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7d000 key 0xa8044f4e len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7f000 key 0xa8044f4f len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid
> 2 nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257d7f000 key 0xa8044f4f len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000000
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1258192000 key 0xa8056e4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000008
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1243239000 key 0xa8056e4d len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x00000018
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x1257f71000 key 0xa8056e4e len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x6fc81a00
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x84da42000 key 0xa804ee4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
>
>
> ......
>
>  <millions of lines>
>
> ......
>
> ```
>
>
>
> In  this trace, the line such as
>
> ```
>
> request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
> nsid 1 cdw10 0x6fc81a00
>
> request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
> 0x84da42000 key 0xa804ee4c len 0x1000
>
> request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
> cdw0=0x00000000 rsvd1=0 status=0x0000
>
> ```
>
> repeats millions of times. As the I/O opc 0x02 is read, it means millions
> of read requests executed after client connection. As a comparison, we
> profile nvmf with NVMe device on same host and SSD. And merely ~50 read
> requests after client connection. So what are these read requests for? And
> why there are so many reads in aio devices?
>
>
>
> Thanks,
>
> --
>
> Meng
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>


-- 
Meng Wang
Electrical and Computer Engineering
University of Florida

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 27133 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [SPDK] Large amount of read requests after connection
@ 2017-11-27 20:14 Meng Wang
  0 siblings, 0 replies; 9+ messages in thread
From: Meng Wang @ 2017-11-27 20:14 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 4815 bytes --]

Hello,
Recently, I profiled nvmf with an AIO device and found a large number of read
requests after connecting to the target. Here are the commands:

```
In server:
./app/nvmf_tgt/nvmf_tgt  -t nvmf -c run/aio.conf

In client:
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s 4420
```
After the client connects, the server prints out the NVMe traces. Here
is an excerpt:

```
......
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2
nsid 0 cdw10 0x00000001
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1257d79000 key 0xa8044f4d len 0x1000
ctrlr.c: 870:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ctrlr data: maxcmd
0x400
ctrlr.c: 871:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: sgls data: 0x100005
ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
io_type 3
ctrlr_bdev.c:  69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
io_type 8
ctrlr.c: 908:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: ioccsz
0x104
ctrlr.c: 910:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: iorcsz
0x1
ctrlr.c: 912:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: icdoff
0x0
ctrlr.c: 914:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
ctrattr 0x0
ctrlr.c: 916:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: msdbd
0x1
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x0c fuse 0 cid
31 nsid 0 cdw10 0x00000000
ctrlr.c: 749:spdk_nvmf_ctrlr_async_event_request: *INFO*: Async Event
Request
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2
nsid 0 cdw10 0x00000002
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1257d7d000 key 0xa8044f4e len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2
nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1257d7f000 key 0xa8044f4f len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2
nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1257d7f000 key 0xa8044f4f len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1258192000 key 0xa8056e4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
nsid 1 cdw10 0x00000008
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1243239000 key 0xa8056e4d len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
nsid 1 cdw10 0x00000018
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1257f71000 key 0xa8056e4e len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x84da42000 key 0xa804ee4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
cdw0=0x00000000 rsvd1=0 status=0x0000

......
 <millions of lines>
......
```

In this trace, a group of lines such as
```
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x84da42000 key 0xa804ee4c len 0x1000
request.c:  60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
cdw0=0x00000000 rsvd1=0 status=0x0000
```
repeats millions of times. Since I/O opcode 0x02 is Read, this means millions
of read requests are executed after the client connects. As a comparison, we
profiled nvmf with an NVMe device on the same host and SSD, and saw merely ~50
read requests after the client connected. So what are these read requests for?
And why are there so many reads with AIO devices?
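
In case it helps narrow this down, one rough way to check whether the reads
walk the whole namespace (for example, something on the host scanning the
device right after connect) is to group them by starting LBA; for the Read
command, cdw10 carries SLBA[31:0]. This is only a sketch and assumes the
target trace output was saved to a file (nvmf_tgt.log is a placeholder name):

```
# Most frequent starting LBAs among the Read commands (cdw10 is the last
# field of each "I/O cmd" trace line):
grep 'I/O cmd: opc 0x02' nvmf_tgt.log | awk '{print $NF}' | sort | uniq -c | sort -rn | head

# Number of distinct starting LBAs:
grep 'I/O cmd: opc 0x02' nvmf_tgt.log | awk '{print $NF}' | sort -u | wc -l
```

If the distinct starting LBAs span most of the namespace, that would suggest
the host is scanning the whole device after connecting rather than re-reading
a few blocks.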

Thanks,
-- 
Meng

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 5629 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2017-11-28  7:02 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-28  0:46 [SPDK] Large amount of read requests after connection Liu, Changpeng
  -- strict thread matches above, loose matches on Subject: below --
2017-11-28  7:02 Meng Wang
2017-11-28  6:58 Liu, Changpeng
2017-11-28  6:43 Meng Wang
2017-11-28  6:09 Liu, Changpeng
2017-11-28  3:07 Liu, Changpeng
2017-11-28  3:02 Meng Wang
2017-11-28  2:18 Meng Wang
2017-11-27 20:14 Meng Wang
