* Re: [SPDK] NVMf target configuration issue
@ 2017-09-06  7:28 Kirubakaran Kaliannan
  0 siblings, 0 replies; 18+ messages in thread
From: Kirubakaran Kaliannan @ 2017-09-06  7:28 UTC (permalink / raw)
  To: spdk


Hi,



I have the SPDK target working for AIO and Malloc.



I have now configured an NVMe device on the same setup, but I am not able to
start the target, as the target is not finding the Nvme0n1 bdev.



Do we have any other dependencies for configuring NVMe devices?
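
One basic sanity check, sketched here on the assumption that the controller has
to be rebound away from the kernel nvme driver (which scripts/setup.sh does)
before the [Nvme] TransportId line can produce an Nvme0n1 bdev, is to confirm
which driver currently owns the device:

# scripts/setup.sh
# lspci -k -s 0000:07:00.0
        Kernel driver in use: uio_pci_generic

If "Kernel driver in use" still shows nvme, SPDK's userspace driver will not be
able to claim the controller.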



*Nvmf.conf*



[Global]



[Rpc]

  Enable No

  Listen 127.0.0.1



[Nvmf]

  MaxQueuesPerSession 4

  AcceptorPollRate 10000



[Nvme]

  TransportId "trtype:PCIe traddr:0000:07:00.0" Nvme0

  RetryCount 4

  Timeout 0

  AdminPollRate 100000

  HotplugEnable Yes



[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 0

  Listen RDMA 10.3.7.2:4420

  AllowAnyHost Yes

  SN SPDK00000000000001

  Namespace Nvme0n1 1



*# app/nvmf_tgt/nvmf_tgt -t all -c /etc/spdk/nvmf.conf *

Starting DPDK 17.05.0 initialization...

[ DPDK EAL parameters: nvmf -c 0x1 --file-prefix=spdk_pid97433 ]

EAL: Detected 20 lcore(s)

EAL: Probing VFIO support...

EAL: VFIO support initialized

Total cores available: 1

Occupied cpu socket mask is 0x1

reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on
socket 0

EAL: PCI device 0000:00:04.0 on NUMA socket 0

EAL:   probe driver: 8086:6f20 spdk_ioat

EAL: PCI device 0000:00:04.1 on NUMA socket 0

EAL:   probe driver: 8086:6f21 spdk_ioat

EAL: PCI device 0000:00:04.2 on NUMA socket 0

EAL:   probe driver: 8086:6f22 spdk_ioat

EAL: PCI device 0000:00:04.3 on NUMA socket 0

EAL:   probe driver: 8086:6f23 spdk_ioat

EAL: PCI device 0000:00:04.4 on NUMA socket 0

EAL:   probe driver: 8086:6f24 spdk_ioat

EAL: PCI device 0000:00:04.5 on NUMA socket 0

EAL:   probe driver: 8086:6f25 spdk_ioat

EAL: PCI device 0000:00:04.6 on NUMA socket 0

EAL:   probe driver: 8086:6f26 spdk_ioat

EAL: PCI device 0000:00:04.7 on NUMA socket 0

EAL:   probe driver: 8086:6f27 spdk_ioat

copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine
Offload Enabled

EAL: PCI device 0000:00:04.0 on NUMA socket 0

EAL:   probe driver: 8086:6f20 spdk_ioat

EAL: PCI device 0000:00:04.1 on NUMA socket 0

EAL:   probe driver: 8086:6f21 spdk_ioat

EAL: PCI device 0000:00:04.2 on NUMA socket 0

EAL:   probe driver: 8086:6f22 spdk_ioat

EAL: PCI device 0000:00:04.3 on NUMA socket 0

EAL:   probe driver: 8086:6f23 spdk_ioat

EAL: PCI device 0000:00:04.4 on NUMA socket 0

EAL:   probe driver: 8086:6f24 spdk_ioat

EAL: PCI device 0000:00:04.5 on NUMA socket 0

EAL:   probe driver: 8086:6f25 spdk_ioat

EAL: PCI device 0000:00:04.6 on NUMA socket 0

EAL:   probe driver: 8086:6f26 spdk_ioat

EAL: PCI device 0000:00:04.7 on NUMA socket 0

EAL:   probe driver: 8086:6f27 spdk_ioat

EAL: PCI device 0000:07:00.0 on NUMA socket 0

EAL:   probe driver: 8086:953 spdk_nvme

nvmf.c:  67:spdk_nvmf_tgt_init: *INFO*: Max Queue Pairs Per Controller: 4

nvmf.c:  68:spdk_nvmf_tgt_init: *INFO*: Max Queue Depth: 128

nvmf.c:  69:spdk_nvmf_tgt_init: *INFO*: Max In Capsule Data: 4096 bytes

nvmf.c:  70:spdk_nvmf_tgt_init: *INFO*: Max I/O Size: 131072 bytes

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0

transport.c:  63:spdk_nvmf_transport_create: *ERROR*: inside
spdk_nvmf_transport_create 1

rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***

rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on
10.3.7.2 port 4420 ***

bdev.c: 214:spdk_bdev_get_by_name: *ERROR*: bdev's returned null 'Nvme0n1'

*conf.c: 492:spdk_nvmf_construct_subsystem: *ERROR*: Could not find
namespace bdev 'Nvme0n1'*

subsystem.c: 232:spdk_nvmf_delete_subsystem: *INFO*: subsystem is 0x1218ef0

nvmf_tgt.c: 276:spdk_nvmf_startup: *ERROR*: spdk_nvmf_parse_conf() failed

subsystem.c: 232:spdk_nvmf_delete_subsystem: *INFO*: subsystem is 0x1218bf0



*# lspci  |grep -i vola*

07:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center
SSD (rev 01)



Thanks

-kiru







*From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Yang, Ziye
*Sent:* Tuesday, September 05, 2017 10:41 AM
*To:* Storage Performance Development Kit
*Subject:* Re: [SPDK] NVMf target configuration issue



And the other choice is to remove the Host-related line, which means any
host can connect.
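
A minimal sketch of that variant, reusing the subsystem section from this
thread (whether an explicit AllowAnyHost directive is also needed here may
depend on the SPDK version):

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 0
  Listen RDMA 10.3.7.2:4420
  AllowAnyHost Yes
  SN SPDK00000000000001
  Namespace AIO0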



*From:* SPDK [mailto:spdk-bounces(a)lists.01.org <spdk-bounces(a)lists.01.org>] *On
Behalf Of *Yang, Ziye
*Sent:* Tuesday, September 5, 2017 1:04 PM
*To:* Storage Performance Development Kit <spdk(a)lists.01.org>
*Subject:* Re: [SPDK] NVMf target configuration issue



Hi Kiru,



According to the error message, I suggest that you add the Host info in the
following section.





[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 0

  Listen RDMA 10.3.7.2:4420

  Host nqn.2016-06.io.spdk:init

*                 Host
nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14*

  SN SPDK00000000000001

  Namespace AIO0





Thanks.



*From:* SPDK [mailto:spdk-bounces(a)lists.01.org <spdk-bounces(a)lists.01.org>] *On
Behalf Of *Kirubakaran Kaliannan
*Sent:* Tuesday, September 5, 2017 12:52 PM
*To:* Storage Performance Development Kit <spdk(a)lists.01.org>
*Subject:* [SPDK] NVMf target configuration issue







Hi All,



I am trying to get the nvmf configuration going with SPDK.



I have the following configuration,



*# ofed 2.4 on both target and initiator*

*# SPDK + 3.18 kernel on target *

*# 4.9 kernel on initiator*

*# Mellanox ConnectX-3 Pro*



*/etc/spdk/nvmf.conf*



[Global]

[Rpc]

  Enable No

  Listen 127.0.0.1

[AIO]

  AIO /dev/sdd AIO0



[Nvmf]

  MaxQueuesPerSession 4

  AcceptorPollRate 10000



[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 0

  Listen RDMA 10.3.7.2:4420

  Host nqn.2016-06.io.spdk:init

  SN SPDK00000000000001

  Namespace AIO0





*From initiator I tried discover and connect*



*# nvme discover -t rdma -a 10.3.7.2 -s 4420*



Discovery Log Number of Records 1, Generation counter 4

=====Discovery Log Entry 0======

trtype:  rdma

adrfam:  ipv4

subtype: nvme subsystem

treq:    not specified

portid:  0

trsvcid: 4420

subnqn:  nqn.2016-06.io.spdk:cnode1

traddr:  10.3.7.2

rdma_prtype: not specified

rdma_qptype: connected

rdma_cms:    rdma-cm

rdma_pkey: 0x0000



*# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420*

Failed to write to /dev/nvme-fabrics: Input/output error



[dmesg]

[59909.541392] nvme nvme0: new ctrl: NQN
"nqn.2014-08.org.nvmexpress.discovery", addr 10.3.7.2:4420

[59940.389952] nvme nvme0: Connect command failed, error wo/DNR bit: 388



*On target I get the following error*



# app/nvmf_tgt/nvmf_tgt -c /etc/spdk/nvmf.conf  > /tmp/1

EAL: Detected 20 lcore(s)

reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on
socket 0

copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine
Offload Enabled

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0

rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***

rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on
10.3.7.2 port 4420 ***

conf.c: 500:spdk_nvmf_construct_subsystem: *NOTICE*: Attaching block device
AIO0 to subsystem nqn.2016-06.io.spdk:cnode1

nvmf_tgt.c: 290:spdk_nvmf_startup: *NOTICE*: Acceptor running on core 0 on
socket 0

*request.c: 171:nvmf_process_connect: *ERROR*: Subsystem
'nqn.2016-06.io.spdk:cnode1' does not allow host
'nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14'*





Can you please help me understand why I am not able to connect with this configuration?
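
For reference, one hedged initiator-side alternative, assuming the installed
nvme-cli supports the --hostnqn option, is to present the host NQN that the
subsystem's Host directive already allows instead of the auto-generated
uuid-based one:

# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420 --hostnqn nqn.2016-06.io.spdk:init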



Regards,

-kiru



* Re: [SPDK] NVMf Target configuration issue
@ 2017-10-18  6:17 Gyan Prakash
  0 siblings, 0 replies; 18+ messages in thread
From: Gyan Prakash @ 2017-10-18  6:17 UTC (permalink / raw)
  To: spdk


Jim,

I will also run scripts/pkgdep.sh on my setup.
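
As a rough sketch of what that amounts to, with the package names taken from
Jim's note quoted below (exact commands depend on the distro):

# scripts/pkgdep.sh

or install just the missing dependency explicitly:

# yum install numactl-devel     (RHEL/CentOS)
# apt-get install libnuma-dev   (Ubuntu/Debian)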

Thanks,
Gyan

On Tue, Oct 17, 2017 at 5:41 PM, Harris, James R <james.r.harris(a)intel.com>
wrote:

> Hi Gyan,
>
>
>
> libnuma/numactl is a recently added package dependency.  You can run
> scripts/pkgdep.sh or just install it explicitly – numactl-devel on
> RHEL/CentOS or  libnuma-dev on Ubuntu/Debian.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Gyan Prakash <
> gyapra2016(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Tuesday, October 17, 2017 at 5:36 PM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] NVMf Target configuration issue
>
>
>
> Hi all,
>
> I got the latest master of SPDK and am getting the following error:
>
> /root/test-spdk/spdk/dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c:56:18:
> fatal error: numa.h: No such file or directory
>  #include <numa.h>
>                   ^
> compilation terminated.
>   CC eal_timer.o
>   CC eal_interrupts.o
>   CC eal_alarm.o
> make[8]: *** [eal_memory.o] Error 1
> make[8]: *** Waiting for unfinished jobs....
> make[7]: *** [eal] Error 2
> make[6]: *** [linuxapp] Error 2
> make[5]: *** [librte_eal] Error 2
> make[4]: *** [lib] Error 2
> make[3]: *** [all] Error 2
> make[3]: Leaving directory `/root/test-spdk/spdk/dpdk'
> make[2]: *** [all] Error 2
> make[1]: *** [all] Error 2
> make: *** [dpdkbuild] Error 2
>
> I am using the in-built DPDK and I do not see the file "numa.h" anywhere.
> Am I supposed to get something installed on my system to get the numa.h?
>
> I used "make CONFIG_RDMA=y CONFIG_DEBUG=y" as my build command.
>
>
>
> Thanks & regards,
>
> Gyan
>
>
>
>
>
> On Tue, Oct 17, 2017 at 1:03 PM, Gyan Prakash <gyapra2016(a)gmail.com>
> wrote:
>
> Hi Daniel,
>
>
>
> Thanks for the reply. You are correct, I am using SPDK v17.03. I will test
> with SPDK v17.07.
>
>
>
> I will also check the referenced documents for the configuration.
>
>
>
> Thanks,
>
> Gyan
>
>
>
> On Tue, Oct 17, 2017 at 12:54 PM, Verkamp, Daniel <
> daniel.verkamp(a)intel.com> wrote:
>
> Hi Gyan,
>
>
>
> It looks like you are using an older version of SPDK before the NVMe-oF
> target was changed to always use the bdev abstraction layer.
>
>
>
> Can you retest with the latest master, or at least SPDK v17.07?  To do
> this, you will need to update your nvmf.conf to specify the NVMe devices
> you want to use as bdevs – see the NVMe bdev configuration docs [1] and
> NVMe-oF target Getting Started guide [2] for more details.
>
>
>
> Thanks,
>
> -- Daniel
>
>
>
> [1]: http://www.spdk.io/doc/bdev.html#bdev_config_nvme
>
> [2]: http://www.spdk.io/doc/nvmf.html#nvmf_getting_started
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Gyan
> Prakash
> *Sent:* Tuesday, October 17, 2017 12:05 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] NVMf Target configuration issue
>
>
> Hi all,
>
> I see an issue with the NVMf Target configuration. I also see the thread *[SPDK]
> NVMf target configuration issue* (*Thu Sep 7 01:29:33 PDT 2017*), which is
> about a similar issue to the one I am seeing. I tried the suggestion from that
> thread, but it did not help with my problem.
>
>
>
> * Error Message:*
>
> * # ./nvmf_tgt -t all -c nvmf.conf*
>
> Starting DPDK 17.02.0 initialization...
>
> [ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
>
> EAL: Detected 12 lcore(s)
>
> EAL: No free hugepages reported in hugepages-1048576kB
>
> EAL: Probing VFIO support...
>
> Occupied cpu core mask is 0xfff
>
> Occupied cpu socket mask is 0x1
>
> EAL: PCI device 0000:00:04.0 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f20 spdk_ioat
>
>  Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
>
> EAL: PCI device 0000:00:04.1 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f21 spdk_ioat
>
>  Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
>
> EAL: PCI device 0000:00:04.2 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f22 spdk_ioat
>
>  Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
>
> EAL: PCI device 0000:00:04.3 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f23 spdk_ioat
>
>  Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
>
> EAL: PCI device 0000:00:04.4 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f24 spdk_ioat
>
>  Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
>
> EAL: PCI device 0000:00:04.5 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f25 spdk_ioat
>
>  Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
>
> EAL: PCI device 0000:00:04.6 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f26 spdk_ioat
>
>  Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
>
> EAL: PCI device 0000:00:04.7 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f27 spdk_ioat
>
>  Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
>
> Ioat Copy Engine Offload Enabled
>
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>
> EAL:   probe driver: 8086:953 spdk_nvme
>
> Probing device 0000:01:00.0
>
> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init (no
> timeout)
>
> [nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
>
> [nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
>
> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable
> and wait for CSTS.RDY = 0 (timeout 20000 ms)
>
> [nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY =
> 0 - enabling controller
>
> [nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
>
> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable and
> wait for CSTS.RDY = 1 (timeout 20000 ms)
>
> [nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY =
> 1 - controller is ready
>
> [nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size
> 2072576
>
> [nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
>
> [nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller
> KAS is 0 - not enabling Keep Alive
>
> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready (no
> timeout)
>
> [debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
>
> Total cores available: 12
>
> Reactor started on core 1 on socket 0
>
> Reactor started on core 2 on socket 0
>
> Reactor started on core 3 on socket 0
>
> Reactor started on core 4 on socket 0
>
> Reactor started on core 5 on socket 0
>
> Reactor started on core 6 on socket 0
>
> Reactor started on core 7 on socket 0
>
> Reactor started on core 8 on socket 0
>
> Reactor started on core 9 on socket 0
>
> Reactor started on core 10 on socket 0
>
> Reactor started on core 0 on socket 0
>
> [nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
>
> [nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
>
> [nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
>
> [nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
>
> *** RDMA Transport Init ***
>
> Reactor started on core 11 on socket 0
>
> *allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on
> socket 0*
>
> *allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0*
>
> *[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with
> context 0x1720db0, created completion channel 0x1720ad0*
>
> *conf.c: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem
> nqn.2016-06.io.spdk:cnode1: missing NVMe directive*
>
> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
> 0x1720b20*
>
> *nvmf_tgt.c: 313:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf()
> failed*
>
> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
> 0x17205b0*
>
> *[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete*
>
>
>
>
>
>
>
>
>
>
>
> *# ./setup.sh*
>
> 0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
>
> 0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
>
> 0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
>
> 0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
>
> 0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
>
> 0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
>
> 0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
>
> 0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
>
> 0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic
>
>
>
> *# lspci | grep -i vola**
>
> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center
> SSD (rev 01)
>
>
>
>
>
>
>
> *# lspci -v -s 0000:01:00.0*
>
> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center
> SSD (rev 01) (prog-if 02 [NVM Express])
>
>         Subsystem: Intel Corporation DC P3700 SSD
>
>         Physical Slot: 1
>
>         Flags: fast devsel, IRQ 24, NUMA node 0
>
>         Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
>
>         Expansion ROM at fb400000 [disabled] [size=64K]
>
>         Capabilities: [40] Power Management version 3
>
>         Capabilities: [50] MSI-X: Enable- Count=32 Masked-
>
>         Capabilities: [60] Express Endpoint, MSI 00
>
>         Capabilities: [100] Advanced Error Reporting
>
>         Capabilities: [150] Virtual Channel
>
>         Capabilities: [180] Power Budgeting <?>
>
>         Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
>
>         Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
>
>         Capabilities: [2a0] #19
>
>         Kernel driver in use: uio_pci_generic
>
>         Kernel modules: nvme
>
>
>
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>



* Re: [SPDK] NVMf Target configuration issue
@ 2017-10-18  1:17 Gyan Prakash
  0 siblings, 0 replies; 18+ messages in thread
From: Gyan Prakash @ 2017-10-18  1:17 UTC (permalink / raw)
  To: spdk


Sorry, my mistake: I had to build DPDK; I found the reference in another
thread.

Now I can build SPDK and will try to use it.
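
For anyone hitting the same rte_memory.h failure: a minimal sketch of the fix,
assuming the problem was simply that the bundled dpdk/ submodule had not yet
been fetched and built before running the SPDK make, is:

# git submodule update --init
# make CONFIG_RDMA=y CONFIG_DEBUG=y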

Thanks Daniel, Liu, and everyone,
Gyan

On Tue, Oct 17, 2017 at 6:06 PM, Gyan Prakash <gyapra2016(a)gmail.com> wrote:

> I installed numactl-devel and libuuid-devel, and the build can now find numa.h
> and uuid.h, but it is failing on rte_memory.h now; please see below.
>
> Can someone help me please?
>
> In file included from bdev_virtio.c:50:0:
> rte_virtio/virtio_dev.h:43:24: fatal error: rte_memory.h: No such file or
> directory
>  #include <rte_memory.h>
>                         ^
> compilation terminated.
> make[3]: *** [bdev_virtio.o] Error 1
> make[2]: *** [virtio] Error 2
> make[1]: *** [bdev] Error 2
> make: *** [lib] Error 2
>
> Thanks,
> Gyan
>
>
> On Tue, Oct 17, 2017 at 5:54 PM, Gyan Prakash <gyapra2016(a)gmail.com>
> wrote:
>
>> Hi Liu,
>> Thanks, I will  install numactl-devel or libnuma-devel on my system and
>> retry it.
>>
>> Thanks again,
>> Gyan
>>
>> On Tue, Oct 17, 2017 at 5:51 PM, Liu, Changpeng <changpeng.liu(a)intel.com>
>> wrote:
>>
>>> Hi Gyan,
>>>
>>>
>>>
>>> numa.h is one of system header files, maybe you can try to install
>>> numactl-devel or libnuma-devel in your system.
>>>
>>>
>>>
>>> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Gyan
>>> Prakash
>>> *Sent:* Wednesday, October 18, 2017 8:37 AM
>>> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
>>> *Subject:* Re: [SPDK] NVMf Target configuration issue
>>>
>>>
>>>
>>> Hi all,
>>>
>>> I got the latest master of SPDK and getting following error:
>>>
>>> /root/test-spdk/spdk/dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c:56:18:
>>> fatal error: numa.h: No such file or directory
>>>  #include <numa.h>
>>>                   ^
>>> compilation terminated.
>>>   CC eal_timer.o
>>>   CC eal_interrupts.o
>>>   CC eal_alarm.o
>>> make[8]: *** [eal_memory.o] Error 1
>>> make[8]: *** Waiting for unfinished jobs....
>>> make[7]: *** [eal] Error 2
>>> make[6]: *** [linuxapp] Error 2
>>> make[5]: *** [librte_eal] Error 2
>>> make[4]: *** [lib] Error 2
>>> make[3]: *** [all] Error 2
>>> make[3]: Leaving directory `/root/test-spdk/spdk/dpdk'
>>> make[2]: *** [all] Error 2
>>> make[1]: *** [all] Error 2
>>> make: *** [dpdkbuild] Error 2
>>>
>>> I am using the in-built DPDK and I do not see the file "numa.h"
>>> anywhere. Am I supposed to get something installed on my system to get the
>>> numa.h?
>>>
>>> I used "make CONFIG_RDMA=y CONFIG_DEBUG=y" as my build command.
>>>
>>>
>>>
>>> Thanks & regards,
>>>
>>> Gyan
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Oct 17, 2017 at 1:03 PM, Gyan Prakash <gyapra2016(a)gmail.com>
>>> wrote:
>>>
>>> Hi Daniel,
>>>
>>>
>>>
>>> Thanks for the reply. You are correct, I am using SPDK v17.03. I will
>>> test with SPDK v17.07.
>>>
>>>
>>>
>>> I will also check the referenced documents for the configuration.
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Gyan
>>>
>>>
>>>
>>> On Tue, Oct 17, 2017 at 12:54 PM, Verkamp, Daniel <
>>> daniel.verkamp(a)intel.com> wrote:
>>>
>>> Hi Gyan,
>>>
>>>
>>>
>>> It looks like you are using an older version of SPDK before the NVMe-oF
>>> target was changed to always use the bdev abstraction layer.
>>>
>>>
>>>
>>> Can you retest with the latest master, or at least SPDK v17.07?  To do
>>> this, you will need to update your nvmf.conf to specify the NVMe
>>> devices you want to use as bdevs – see the NVMe bdev configuration docs
>>> [1] and NVMe-oF target Getting Started guide [2] for more details.
>>>
>>>
>>>
>>> Thanks,
>>>
>>> -- Daniel
>>>
>>>
>>>
>>> [1]: http://www.spdk.io/doc/bdev.html#bdev_config_nvme
>>>
>>> [2]: http://www.spdk.io/doc/nvmf.html#nvmf_getting_started
>>>
>>>
>>>
>>> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Gyan
>>> Prakash
>>> *Sent:* Tuesday, October 17, 2017 12:05 PM
>>> *To:* spdk(a)lists.01.org
>>> *Subject:* [SPDK] NVMf Target configuration issue
>>>
>>>
>>> *Hi all,*
>>>
>>> I see issue with NVMf Target configuration. I also see the thread  *[SPDK]
>>> NVMf target configuration issue **Thu Sep 7 01:29:33 PDT 2017 *which is
>>> for the similar issue what I am seeing. I tried the suggestion from that
>>> thread, but it was not helpful for my problem.
>>>
>>>
>>>
>>> * Error Message:*
>>>
>>> * # ./nvmf_tgt -t all -c nvmf.conf*
>>>
>>> Starting DPDK 17.02.0 initialization...
>>>
>>> [ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
>>>
>>> EAL: Detected 12 lcore(s)
>>>
>>> EAL: No free hugepages reported in hugepages-1048576kB
>>>
>>> EAL: Probing VFIO support...
>>>
>>> Occupied cpu core mask is 0xfff
>>>
>>> Occupied cpu socket mask is 0x1
>>>
>>> EAL: PCI device 0000:00:04.0 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f20 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
>>>
>>> EAL: PCI device 0000:00:04.1 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f21 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
>>>
>>> EAL: PCI device 0000:00:04.2 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f22 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
>>>
>>> EAL: PCI device 0000:00:04.3 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f23 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
>>>
>>> EAL: PCI device 0000:00:04.4 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f24 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
>>>
>>> EAL: PCI device 0000:00:04.5 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f25 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
>>>
>>> EAL: PCI device 0000:00:04.6 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f26 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
>>>
>>> EAL: PCI device 0000:00:04.7 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f27 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
>>>
>>> Ioat Copy Engine Offload Enabled
>>>
>>> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:953 spdk_nvme
>>>
>>> Probing device 0000:01:00.0
>>>
>>> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init
>>> (no timeout)
>>>
>>> [nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
>>>
>>> [nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
>>>
>>> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable
>>> and wait for CSTS.RDY = 0 (timeout 20000 ms)
>>>
>>> [nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY
>>> = 0 - enabling controller
>>>
>>> [nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
>>>
>>> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable
>>> and wait for CSTS.RDY = 1 (timeout 20000 ms)
>>>
>>> [nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY
>>> = 1 - controller is ready
>>>
>>> [nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size
>>> 2072576
>>>
>>> [nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
>>>
>>> [nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller
>>> KAS is 0 - not enabling Keep Alive
>>>
>>> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready
>>> (no timeout)
>>>
>>> [debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
>>>
>>> Total cores available: 12
>>>
>>> Reactor started on core 1 on socket 0
>>>
>>> Reactor started on core 2 on socket 0
>>>
>>> Reactor started on core 3 on socket 0
>>>
>>> Reactor started on core 4 on socket 0
>>>
>>> Reactor started on core 5 on socket 0
>>>
>>> Reactor started on core 6 on socket 0
>>>
>>> Reactor started on core 7 on socket 0
>>>
>>> Reactor started on core 8 on socket 0
>>>
>>> Reactor started on core 9 on socket 0
>>>
>>> Reactor started on core 10 on socket 0
>>>
>>> Reactor started on core 0 on socket 0
>>>
>>> [nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
>>>
>>> [nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
>>>
>>> [nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
>>>
>>> [nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
>>>
>>> *** RDMA Transport Init ***
>>>
>>> Reactor started on core 11 on socket 0
>>>
>>> *allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on
>>> socket 0*
>>>
>>> *allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0*
>>>
>>> *[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with
>>> context 0x1720db0, created completion channel 0x1720ad0*
>>>
>>> *conf.c**: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem
>>> nqn.2016-06.io.spdk:cnode1: missing NVMe directive*
>>>
>>> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
>>> 0x1720b20*
>>>
>>> *nvmf_tgt.c**: 313:spdk_nvmf_startup: ***ERROR***
>>> spdk_nvmf_parse_conf() failed*
>>>
>>> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
>>> 0x17205b0*
>>>
>>> *[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete*
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *# ./setup.sh*
>>>
>>> 0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
>>>
>>> 0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic
>>>
>>>
>>>
>>> *# lspci | grep -i vola**
>>>
>>> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data
>>> Center SSD (rev 01)
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *# lspci -v -s 0000:01:00.0*
>>>
>>> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data
>>> Center SSD (rev 01) (prog-if 02 [NVM Express])
>>>
>>>         Subsystem: Intel Corporation DC P3700 SSD
>>>
>>>         Physical Slot: 1
>>>
>>>         Flags: fast devsel, IRQ 24, NUMA node 0
>>>
>>>         Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
>>>
>>>         Expansion ROM at fb400000 [disabled] [size=64K]
>>>
>>>         Capabilities: [40] Power Management version 3
>>>
>>>         Capabilities: [50] MSI-X: Enable- Count=32 Masked-
>>>
>>>         Capabilities: [60] Express Endpoint, MSI 00
>>>
>>>         Capabilities: [100] Advanced Error Reporting
>>>
>>>         Capabilities: [150] Virtual Channel
>>>
>>>         Capabilities: [180] Power Budgeting <?>
>>>
>>>         Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
>>>
>>>         Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
>>>
>>>         Capabilities: [2a0] #19
>>>
>>>         Kernel driver in use: uio_pci_generic
>>>
>>>         Kernel modules: nvme
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> SPDK mailing list
>>> SPDK(a)lists.01.org
>>> https://lists.01.org/mailman/listinfo/spdk
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> SPDK mailing list
>>> SPDK(a)lists.01.org
>>> https://lists.01.org/mailman/listinfo/spdk
>>>
>>>
>>
>



* Re: [SPDK] NVMf Target configuration issue
@ 2017-10-18  1:06 Gyan Prakash
  0 siblings, 0 replies; 18+ messages in thread
From: Gyan Prakash @ 2017-10-18  1:06 UTC (permalink / raw)
  To: spdk


I installed numactl-devel and libuuid-devel, and the build can now find numa.h
and uuid.h, but it is failing on rte_memory.h now; please see below.

Can someone help me please?

In file included from bdev_virtio.c:50:0:
rte_virtio/virtio_dev.h:43:24: fatal error: rte_memory.h: No such file or
directory
 #include <rte_memory.h>
                        ^
compilation terminated.
make[3]: *** [bdev_virtio.o] Error 1
make[2]: *** [virtio] Error 2
make[1]: *** [bdev] Error 2
make: *** [lib] Error 2

Thanks,
Gyan


On Tue, Oct 17, 2017 at 5:54 PM, Gyan Prakash <gyapra2016(a)gmail.com> wrote:

> Hi Liu,
> Thanks, I will  install numactl-devel or libnuma-devel on my system and
> retry it.
>
> Thanks again,
> Gyan
>
> On Tue, Oct 17, 2017 at 5:51 PM, Liu, Changpeng <changpeng.liu(a)intel.com>
> wrote:
>
>> Hi Gyan,
>>
>>
>>
>> numa.h is one of system header files, maybe you can try to install
>> numactl-devel or libnuma-devel in your system.
>>
>>
>>
>> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Gyan
>> Prakash
>> *Sent:* Wednesday, October 18, 2017 8:37 AM
>> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject:* Re: [SPDK] NVMf Target configuration issue
>>
>>
>>
>> Hi all,
>>
>> I got the latest master of SPDK and getting following error:
>>
>> /root/test-spdk/spdk/dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c:56:18:
>> fatal error: numa.h: No such file or directory
>>  #include <numa.h>
>>                   ^
>> compilation terminated.
>>   CC eal_timer.o
>>   CC eal_interrupts.o
>>   CC eal_alarm.o
>> make[8]: *** [eal_memory.o] Error 1
>> make[8]: *** Waiting for unfinished jobs....
>> make[7]: *** [eal] Error 2
>> make[6]: *** [linuxapp] Error 2
>> make[5]: *** [librte_eal] Error 2
>> make[4]: *** [lib] Error 2
>> make[3]: *** [all] Error 2
>> make[3]: Leaving directory `/root/test-spdk/spdk/dpdk'
>> make[2]: *** [all] Error 2
>> make[1]: *** [all] Error 2
>> make: *** [dpdkbuild] Error 2
>>
>> I am using the in-built DPDK and I do not see the file "numa.h"
>> anywhere. Am I supposed to get something installed on my system to get the
>> numa.h?
>>
>> I used "make CONFIG_RDMA=y CONFIG_DEBUG=y" as my build command.
>>
>>
>>
>> Thanks & regards,
>>
>> Gyan
>>
>>
>>
>>
>>
>> On Tue, Oct 17, 2017 at 1:03 PM, Gyan Prakash <gyapra2016(a)gmail.com>
>> wrote:
>>
>> Hi Daniel,
>>
>>
>>
>> Thanks for the reply. You are correct, I am using SPDK v17.03. I will
>> test with SPDK v17.07.
>>
>>
>>
>> I will also check the referenced documents for the configuration.
>>
>>
>>
>> Thanks,
>>
>> Gyan
>>
>>
>>
>> On Tue, Oct 17, 2017 at 12:54 PM, Verkamp, Daniel <
>> daniel.verkamp(a)intel.com> wrote:
>>
>> Hi Gyan,
>>
>>
>>
>> It looks like you are using an older version of SPDK before the NVMe-oF
>> target was changed to always use the bdev abstraction layer.
>>
>>
>>
>> Can you retest with the latest master, or at least SPDK v17.07?  To do
>> this, you will need to update your nvmf.conf to specify the NVMe devices
>> you want to use as bdevs – see the NVMe bdev configuration docs [1] and
>> NVMe-oF target Getting Started guide [2] for more details.
>>
>>
>>
>> Thanks,
>>
>> -- Daniel
>>
>>
>>
>> [1]: http://www.spdk.io/doc/bdev.html#bdev_config_nvme
>>
>> [2]: http://www.spdk.io/doc/nvmf.html#nvmf_getting_started
>>
>>
>>
>> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Gyan
>> Prakash
>> *Sent:* Tuesday, October 17, 2017 12:05 PM
>> *To:* spdk(a)lists.01.org
>> *Subject:* [SPDK] NVMf Target configuration issue
>>
>>
>> *Hi all,*
>>
>> I see issue with NVMf Target configuration. I also see the thread  *[SPDK]
>> NVMf target configuration issue **Thu Sep 7 01:29:33 PDT 2017 *which is
>> for the similar issue what I am seeing. I tried the suggestion from that
>> thread, but it was not helpful for my problem.
>>
>>
>>
>> * Error Message:*
>>
>> * # ./nvmf_tgt -t all -c nvmf.conf*
>>
>> Starting DPDK 17.02.0 initialization...
>>
>> [ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
>>
>> EAL: Detected 12 lcore(s)
>>
>> EAL: No free hugepages reported in hugepages-1048576kB
>>
>> EAL: Probing VFIO support...
>>
>> Occupied cpu core mask is 0xfff
>>
>> Occupied cpu socket mask is 0x1
>>
>> EAL: PCI device 0000:00:04.0 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f20 spdk_ioat
>>
>>  Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
>>
>> EAL: PCI device 0000:00:04.1 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f21 spdk_ioat
>>
>>  Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
>>
>> EAL: PCI device 0000:00:04.2 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f22 spdk_ioat
>>
>>  Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
>>
>> EAL: PCI device 0000:00:04.3 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f23 spdk_ioat
>>
>>  Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
>>
>> EAL: PCI device 0000:00:04.4 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f24 spdk_ioat
>>
>>  Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
>>
>> EAL: PCI device 0000:00:04.5 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f25 spdk_ioat
>>
>>  Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
>>
>> EAL: PCI device 0000:00:04.6 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f26 spdk_ioat
>>
>>  Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
>>
>> EAL: PCI device 0000:00:04.7 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f27 spdk_ioat
>>
>>  Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
>>
>> Ioat Copy Engine Offload Enabled
>>
>> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:953 spdk_nvme
>>
>> Probing device 0000:01:00.0
>>
>> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init (no
>> timeout)
>>
>> [nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
>>
>> [nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
>>
>> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable
>> and wait for CSTS.RDY = 0 (timeout 20000 ms)
>>
>> [nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY
>> = 0 - enabling controller
>>
>> [nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
>>
>> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable
>> and wait for CSTS.RDY = 1 (timeout 20000 ms)
>>
>> [nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY
>> = 1 - controller is ready
>>
>> [nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size
>> 2072576
>>
>> [nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
>>
>> [nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller
>> KAS is 0 - not enabling Keep Alive
>>
>> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready
>> (no timeout)
>>
>> [debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
>>
>> Total cores available: 12
>>
>> Reactor started on core 1 on socket 0
>>
>> Reactor started on core 2 on socket 0
>>
>> Reactor started on core 3 on socket 0
>>
>> Reactor started on core 4 on socket 0
>>
>> Reactor started on core 5 on socket 0
>>
>> Reactor started on core 6 on socket 0
>>
>> Reactor started on core 7 on socket 0
>>
>> Reactor started on core 8 on socket 0
>>
>> Reactor started on core 9 on socket 0
>>
>> Reactor started on core 10 on socket 0
>>
>> Reactor started on core 0 on socket 0
>>
>> [nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
>>
>> [nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
>>
>> [nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
>>
>> [nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
>>
>> *** RDMA Transport Init ***
>>
>> Reactor started on core 11 on socket 0
>>
>> *allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on
>> socket 0*
>>
>> *allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0*
>>
>> *[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with
>> context 0x1720db0, created completion channel 0x1720ad0*
>>
>> *conf.c**: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem
>> nqn.2016-06.io.spdk:cnode1: missing NVMe directive*
>>
>> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
>> 0x1720b20*
>>
>> *nvmf_tgt.c**: 313:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf()
>> failed*
>>
>> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
>> 0x17205b0*
>>
>> *[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete*
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> *# ./setup.sh*
>>
>> 0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
>>
>> 0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic
>>
>>
>>
>> *# lspci | grep -i vola**
>>
>> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data
>> Center SSD (rev 01)
>>
>>
>>
>>
>>
>>
>>
>> *# lspci -v -s 0000:01:00.0*
>>
>> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data
>> Center SSD (rev 01) (prog-if 02 [NVM Express])
>>
>>         Subsystem: Intel Corporation DC P3700 SSD
>>
>>         Physical Slot: 1
>>
>>         Flags: fast devsel, IRQ 24, NUMA node 0
>>
>>         Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
>>
>>         Expansion ROM at fb400000 [disabled] [size=64K]
>>
>>         Capabilities: [40] Power Management version 3
>>
>>         Capabilities: [50] MSI-X: Enable- Count=32 Masked-
>>
>>         Capabilities: [60] Express Endpoint, MSI 00
>>
>>         Capabilities: [100] Advanced Error Reporting
>>
>>         Capabilities: [150] Virtual Channel
>>
>>         Capabilities: [180] Power Budgeting <?>
>>
>>         Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
>>
>>         Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
>>
>>         Capabilities: [2a0] #19
>>
>>         Kernel driver in use: uio_pci_generic
>>
>>         Kernel modules: nvme
>>
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>>
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>



* Re: [SPDK] NVMf Target configuration issue
@ 2017-10-18  0:54 Gyan Prakash
  0 siblings, 0 replies; 18+ messages in thread
From: Gyan Prakash @ 2017-10-18  0:54 UTC (permalink / raw)
  To: spdk


Hi Liu,
Thanks, I will install numactl-devel or libnuma-devel on my system and
retry it.

Thanks again,
Gyan

On Tue, Oct 17, 2017 at 5:51 PM, Liu, Changpeng <changpeng.liu(a)intel.com>
wrote:

> Hi Gyan,
>
>
>
> numa.h is one of the system header files; maybe you can try to install
> numactl-devel or libnuma-devel on your system.
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Gyan
> Prakash
> *Sent:* Wednesday, October 18, 2017 8:37 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] NVMf Target configuration issue
>
>
>
> Hi all,
>
> I got the latest master of SPDK and am getting the following error:
>
> /root/test-spdk/spdk/dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c:56:18:
> fatal error: numa.h: No such file or directory
>  #include <numa.h>
>                   ^
> compilation terminated.
>   CC eal_timer.o
>   CC eal_interrupts.o
>   CC eal_alarm.o
> make[8]: *** [eal_memory.o] Error 1
> make[8]: *** Waiting for unfinished jobs....
> make[7]: *** [eal] Error 2
> make[6]: *** [linuxapp] Error 2
> make[5]: *** [librte_eal] Error 2
> make[4]: *** [lib] Error 2
> make[3]: *** [all] Error 2
> make[3]: Leaving directory `/root/test-spdk/spdk/dpdk'
> make[2]: *** [all] Error 2
> make[1]: *** [all] Error 2
> make: *** [dpdkbuild] Error 2
>
> I am using the in-built DPDK and I do not see the file "numa.h" anywhere.
> Am I supposed to get something installed on my system to get the numa.h?
>
> I used "make CONFIG_RDMA=y CONFIG_DEBUG=y" as my build command.
>
>
>
> Thanks & regards,
>
> Gyan
>
>
>
>
>
> On Tue, Oct 17, 2017 at 1:03 PM, Gyan Prakash <gyapra2016(a)gmail.com>
> wrote:
>
> Hi Daniel,
>
>
>
> Thanks for the reply. You are correct, I am using SPDK v17.03. I will test
> with SPDK v17.07.
>
>
>
> I will also check the referenced documents for the configuration.
>
>
>
> Thanks,
>
> Gyan
>
>
>
> On Tue, Oct 17, 2017 at 12:54 PM, Verkamp, Daniel <
> daniel.verkamp(a)intel.com> wrote:
>
> Hi Gyan,
>
>
>
> It looks like you are using an older version of SPDK before the NVMe-oF
> target was changed to always use the bdev abstraction layer.
>
>
>
> Can you retest with the latest master, or at least SPDK v17.07?  To do
> this, you will need to update your nvmf.conf to specify the NVMe devices
> you want to use as bdevs – see the NVMe bdev configuration docs [1] and
> NVMe-oF target Getting Started guide [2] for more details.
>
>
>
> Thanks,
>
> -- Daniel
>
>
>
> [1]: http://www.spdk.io/doc/bdev.html#bdev_config_nvme
>
> [2]: http://www.spdk.io/doc/nvmf.html#nvmf_getting_started
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Gyan
> Prakash
> *Sent:* Tuesday, October 17, 2017 12:05 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] NVMf Target configuration issue
>
>
> *Hi all,*
>
> I see an issue with the NVMf Target configuration. I also see the thread *[SPDK]
> NVMf target configuration issue* (*Thu Sep 7 01:29:33 PDT 2017*), which is
> about a similar issue to the one I am seeing. I tried the suggestion from that
> thread, but it did not help with my problem.
>
>
>
> * Error Message:*
>
> * # ./nvmf_tgt -t all -c nvmf.conf*
>
> Starting DPDK 17.02.0 initialization...
>
> [ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
>
> EAL: Detected 12 lcore(s)
>
> EAL: No free hugepages reported in hugepages-1048576kB
>
> EAL: Probing VFIO support...
>
> Occupied cpu core mask is 0xfff
>
> Occupied cpu socket mask is 0x1
>
> EAL: PCI device 0000:00:04.0 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f20 spdk_ioat
>
>  Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
>
> EAL: PCI device 0000:00:04.1 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f21 spdk_ioat
>
>  Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
>
> EAL: PCI device 0000:00:04.2 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f22 spdk_ioat
>
>  Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
>
> EAL: PCI device 0000:00:04.3 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f23 spdk_ioat
>
>  Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
>
> EAL: PCI device 0000:00:04.4 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f24 spdk_ioat
>
>  Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
>
> EAL: PCI device 0000:00:04.5 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f25 spdk_ioat
>
>  Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
>
> EAL: PCI device 0000:00:04.6 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f26 spdk_ioat
>
>  Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
>
> EAL: PCI device 0000:00:04.7 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f27 spdk_ioat
>
>  Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
>
> Ioat Copy Engine Offload Enabled
>
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>
> EAL:   probe driver: 8086:953 spdk_nvme
>
> Probing device 0000:01:00.0
>
> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init (no
> timeout)
>
> [nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
>
> [nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
>
> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable
> and wait for CSTS.RDY = 0 (timeout 20000 ms)
>
> [nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY =
> 0 - enabling controller
>
> [nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
>
> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable
> and wait for CSTS.RDY = 1 (timeout 20000 ms)
>
> [nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY =
> 1 - controller is ready
>
> [nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size
> 2072576
>
> [nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
>
> [nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller
> KAS is 0 - not enabling Keep Alive
>
> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready (no
> timeout)
>
> [debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
>
> Total cores available: 12
>
> Reactor started on core 1 on socket 0
>
> Reactor started on core 2 on socket 0
>
> Reactor started on core 3 on socket 0
>
> Reactor started on core 4 on socket 0
>
> Reactor started on core 5 on socket 0
>
> Reactor started on core 6 on socket 0
>
> Reactor started on core 7 on socket 0
>
> Reactor started on core 8 on socket 0
>
> Reactor started on core 9 on socket 0
>
> Reactor started on core 10 on socket 0
>
> Reactor started on core 0 on socket 0
>
> [nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
>
> [nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
>
> [nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
>
> [nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
>
> *** RDMA Transport Init ***
>
> Reactor started on core 11 on socket 0
>
> *allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on
> socket 0*
>
> *allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0*
>
> *[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with
> context 0x1720db0, created completion channel 0x1720ad0*
>
> *conf.c**: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem
> nqn.2016-06.io.spdk:cnode1: missing NVMe directive*
>
> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
> 0x1720b20*
>
> *nvmf_tgt.c**: 313:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf()
> failed*
>
> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
> 0x17205b0*
>
> *[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete*
>
>
>
>
>
>
>
>
>
>
>
> *# ./setup.sh*
>
> 0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
>
> 0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
>
> 0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
>
> 0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
>
> 0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
>
> 0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
>
> 0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
>
> 0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
>
> 0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic
>
>
>
> *# lspci | grep -i vola**
>
> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data
> Center SSD (rev 01)
>
>
>
>
>
>
>
> *# lspci -v -s 0000:01:00.0*
>
> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data
> Center SSD (rev 01) (prog-if 02 [NVM Express])
>
>         Subsystem: Intel Corporation DC P3700 SSD
>
>         Physical Slot: 1
>
>         Flags: fast devsel, IRQ 24, NUMA node 0
>
>         Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
>
>         Expansion ROM at fb400000 [disabled] [size=64K]
>
>         Capabilities: [40] Power Management version 3
>
>         Capabilities: [50] MSI-X: Enable- Count=32 Masked-
>
>         Capabilities: [60] Express Endpoint, MSI 00
>
>         Capabilities: [100] Advanced Error Reporting
>
>         Capabilities: [150] Virtual Channel
>
>         Capabilities: [180] Power Budgeting <?>
>
>         Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
>
>         Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
>
>         Capabilities: [2a0] #19
>
>         Kernel driver in use: uio_pci_generic
>
>         Kernel modules: nvme
>
>
>
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>



* Re: [SPDK] NVMf Target configuration issue
@ 2017-10-18  0:51 Gyan Prakash
  0 siblings, 0 replies; 18+ messages in thread
From: Gyan Prakash @ 2017-10-18  0:51 UTC (permalink / raw)
  To: spdk


It seems like I am missing something on my computer; I am using CentOS 7.3 with
its built-in kernel.

I see more build errors:
In file included from vbdev_lvol.h:37:0,
                 from vbdev_lvol.c:39:
/root/test-spdk/spdk/include/spdk/lvol.h:43:23: fatal error: uuid/uuid.h:
No such file or directory
 #include <uuid/uuid.h>
                       ^
compilation terminated.
make[3]: *** [vbdev_lvol.o] Error 1
make[2]: *** [lvol] Error 2
make[1]: *** [bdev] Error 2
make: *** [lib] Error 2

I did not see these errors with the 17.03 version on the same system.
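
A minimal sketch of the likely fix on CentOS 7.3, assuming the missing
uuid/uuid.h header comes from the libuuid development package (scripts/pkgdep.sh
on a current master may also cover it):

# yum install libuuid-devel
# make CONFIG_RDMA=y CONFIG_DEBUG=y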

Thanks,
Gyan

On Tue, Oct 17, 2017 at 5:36 PM, Gyan Prakash <gyapra2016(a)gmail.com> wrote:

> Hi all,
>
> I got the latest master of SPDK and am getting the following error:
>
> /root/test-spdk/spdk/dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c:56:18:
> fatal error: numa.h: No such file or directory
>  #include <numa.h>
>                   ^
> compilation terminated.
>   CC eal_timer.o
>   CC eal_interrupts.o
>   CC eal_alarm.o
> make[8]: *** [eal_memory.o] Error 1
> make[8]: *** Waiting for unfinished jobs....
> make[7]: *** [eal] Error 2
> make[6]: *** [linuxapp] Error 2
> make[5]: *** [librte_eal] Error 2
> make[4]: *** [lib] Error 2
> make[3]: *** [all] Error 2
> make[3]: Leaving directory `/root/test-spdk/spdk/dpdk'
> make[2]: *** [all] Error 2
> make[1]: *** [all] Error 2
> make: *** [dpdkbuild] Error 2
>
>
> I am using the in-built DPDK and I do not see the file "numa.h" anywhere.
> Am I supposed to get something installed on my system to get the numa.h?
>
> I used "make CONFIG_RDMA=y CONFIG_DEBUG=y" as my build command.
>
> Thanks & regards,
> Gyan
>
>
> On Tue, Oct 17, 2017 at 1:03 PM, Gyan Prakash <gyapra2016(a)gmail.com>
> wrote:
>
>> Hi Daniel,
>>
>> Thanks for the reply. You are correct, I am using SPDK v17.03. I will
>> test with SPDK v17.07.
>>
>> I will also check the referenced documents for the configuration.
>>
>> Thanks,
>> Gyan
>>
>> On Tue, Oct 17, 2017 at 12:54 PM, Verkamp, Daniel <
>> daniel.verkamp(a)intel.com> wrote:
>>
>>> Hi Gyan,
>>>
>>>
>>>
>>> It looks like you are using an older version of SPDK before the NVMe-oF
>>> target was changed to always use the bdev abstraction layer.
>>>
>>>
>>>
>>> Can you retest with the latest master, or at least SPDK v17.07?  To do
>>> this, you will need to update your nvmf.conf to specify the NVMe devices
>>> you want to use as bdevs – see the NVMe bdev configuration docs [1] and
>>> NVMe-oF target Getting Started guide [2] for more details.
>>>
>>>
>>>
>>> Thanks,
>>>
>>> -- Daniel
>>>
>>>
>>>
>>> [1]: http://www.spdk.io/doc/bdev.html#bdev_config_nvme
>>>
>>> [2]: http://www.spdk.io/doc/nvmf.html#nvmf_getting_started
>>>
>>>
>>>
>>> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Gyan
>>> Prakash
>>> *Sent:* Tuesday, October 17, 2017 12:05 PM
>>> *To:* spdk(a)lists.01.org
>>> *Subject:* [SPDK] NVMf Target configuration issue
>>>
>>>
>>> Hi all,
>>>
>>> I see issue with NVMf Target configuration. I also see the thread  *[SPDK]
>>> NVMf target configuration issue **Thu Sep 7 01:29:33 PDT 2017 *which is
>>> for the similar issue what I am seeing. I tried the suggestion from that
>>> thread, but it was not helpful for my problem.
>>>
>>>
>>>
>>> * Error Message:*
>>>
>>> * # ./nvmf_tgt -t all -c nvmf.conf*
>>>
>>> Starting DPDK 17.02.0 initialization...
>>>
>>> [ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
>>>
>>> EAL: Detected 12 lcore(s)
>>>
>>> EAL: No free hugepages reported in hugepages-1048576kB
>>>
>>> EAL: Probing VFIO support...
>>>
>>> Occupied cpu core mask is 0xfff
>>>
>>> Occupied cpu socket mask is 0x1
>>>
>>> EAL: PCI device 0000:00:04.0 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f20 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
>>>
>>> EAL: PCI device 0000:00:04.1 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f21 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
>>>
>>> EAL: PCI device 0000:00:04.2 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f22 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
>>>
>>> EAL: PCI device 0000:00:04.3 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f23 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
>>>
>>> EAL: PCI device 0000:00:04.4 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f24 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
>>>
>>> EAL: PCI device 0000:00:04.5 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f25 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
>>>
>>> EAL: PCI device 0000:00:04.6 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f26 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
>>>
>>> EAL: PCI device 0000:00:04.7 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:2f27 spdk_ioat
>>>
>>>  Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
>>>
>>> Ioat Copy Engine Offload Enabled
>>>
>>> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>>>
>>> EAL:   probe driver: 8086:953 spdk_nvme
>>>
>>> Probing device 0000:01:00.0
>>>
>>> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init (no
>>> timeout)
>>>
>>> [nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
>>>
>>> [nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
>>>
>>> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable
>>> and wait for CSTS.RDY = 0 (timeout 20000 ms)
>>>
>>> [nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY
>>> = 0 - enabling controller
>>>
>>> [nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
>>>
>>> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable
>>> and wait for CSTS.RDY = 1 (timeout 20000 ms)
>>>
>>> [nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY
>>> = 1 - controller is ready
>>>
>>> [nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size
>>> 2072576
>>>
>>> [nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
>>>
>>> [nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller
>>> KAS is 0 - not enabling Keep Alive
>>>
>>> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready
>>> (no timeout)
>>>
>>> [debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
>>>
>>> Total cores available: 12
>>>
>>> Reactor started on core 1 on socket 0
>>>
>>> Reactor started on core 2 on socket 0
>>>
>>> Reactor started on core 3 on socket 0
>>>
>>> Reactor started on core 4 on socket 0
>>>
>>> Reactor started on core 5 on socket 0
>>>
>>> Reactor started on core 6 on socket 0
>>>
>>> Reactor started on core 7 on socket 0
>>>
>>> Reactor started on core 8 on socket 0
>>>
>>> Reactor started on core 9 on socket 0
>>>
>>> Reactor started on core 10 on socket 0
>>>
>>> Reactor started on core 0 on socket 0
>>>
>>> [nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
>>>
>>> [nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
>>>
>>> [nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
>>>
>>> [nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
>>>
>>> *** RDMA Transport Init ***
>>>
>>> Reactor started on core 11 on socket 0
>>>
>>> *allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on
>>> socket 0*
>>>
>>> *allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0*
>>>
>>> *[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with
>>> context 0x1720db0, created completion channel 0x1720ad0*
>>>
>>> *conf.c: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem
>>> nqn.2016-06.io.spdk:cnode1: missing NVMe directive*
>>>
>>> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
>>> 0x1720b20*
>>>
>>> *nvmf_tgt.c: 313:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf()
>>> failed*
>>>
>>> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
>>> 0x17205b0*
>>>
>>> *[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete*
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *# ./setup.sh*
>>>
>>> 0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
>>>
>>> 0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
>>>
>>> 0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic
>>>
>>>
>>>
>>> *# lspci | grep -i vola**
>>>
>>> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data
>>> Center SSD (rev 01)
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *# lspci -v -s 0000:01:00.0*
>>>
>>> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data
>>> Center SSD (rev 01) (prog-if 02 [NVM Express])
>>>
>>>         Subsystem: Intel Corporation DC P3700 SSD
>>>
>>>         Physical Slot: 1
>>>
>>>         Flags: fast devsel, IRQ 24, NUMA node 0
>>>
>>>         Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
>>>
>>>         Expansion ROM at fb400000 [disabled] [size=64K]
>>>
>>>         Capabilities: [40] Power Management version 3
>>>
>>>         Capabilities: [50] MSI-X: Enable- Count=32 Masked-
>>>
>>>         Capabilities: [60] Express Endpoint, MSI 00
>>>
>>>         Capabilities: [100] Advanced Error Reporting
>>>
>>>         Capabilities: [150] Virtual Channel
>>>
>>>         Capabilities: [180] Power Budgeting <?>
>>>
>>>         Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
>>>
>>>         Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
>>>
>>>         Capabilities: [2a0] #19
>>>
>>>         Kernel driver in use: uio_pci_generic
>>>
>>>         Kernel modules: nvme
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> SPDK mailing list
>>> SPDK(a)lists.01.org
>>> https://lists.01.org/mailman/listinfo/spdk
>>>
>>>
>>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 28902 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] NVMf Target configuration issue
@ 2017-10-18  0:51 Liu, Changpeng
  0 siblings, 0 replies; 18+ messages in thread
From: Liu, Changpeng @ 2017-10-18  0:51 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 8963 bytes --]

Hi Gyan,

numa.h is one of the system header files; you could try installing numactl-devel or libnuma-devel on your system.
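
For example, a rough sketch of the fix (the package names depend on your distribution, as Jim notes elsewhere in this thread), followed by the build command used earlier:

  yum install numactl-devel       # RHEL/CentOS
  apt-get install libnuma-dev     # Ubuntu/Debian
  make CONFIG_RDMA=y CONFIG_DEBUG=y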

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Gyan Prakash
Sent: Wednesday, October 18, 2017 8:37 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] NVMf Target configuration issue

Hi all,
I got the latest master of SPDK and getting following error:

/root/test-spdk/spdk/dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c:56:18: fatal error: numa.h: No such file or directory
 #include <numa.h>
                  ^
compilation terminated.
  CC eal_timer.o
  CC eal_interrupts.o
  CC eal_alarm.o
make[8]: *** [eal_memory.o] Error 1
make[8]: *** Waiting for unfinished jobs....
make[7]: *** [eal] Error 2
make[6]: *** [linuxapp] Error 2
make[5]: *** [librte_eal] Error 2
make[4]: *** [lib] Error 2
make[3]: *** [all] Error 2
make[3]: Leaving directory `/root/test-spdk/spdk/dpdk'
make[2]: *** [all] Error 2
make[1]: *** [all] Error 2
make: *** [dpdkbuild] Error 2

I am using the in-built DPDK and I do not see the file "numa.h" anywhere. Am I supposed to get something installed on my system to get the numa.h?
I used "make CONFIG_RDMA=y CONFIG_DEBUG=y" as my build command.

Thanks & regards,
Gyan


On Tue, Oct 17, 2017 at 1:03 PM, Gyan Prakash <gyapra2016(a)gmail.com<mailto:gyapra2016(a)gmail.com>> wrote:
Hi Daniel,

Thanks for the reply. You are correct, I am using SPDK v17.03. I will test with SPDK v17.07.

I will also check the referenced documents for the configuration.

Thanks,
Gyan

On Tue, Oct 17, 2017 at 12:54 PM, Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>> wrote:
Hi Gyan,

It looks like you are using an older version of SPDK before the NVMe-oF target was changed to always use the bdev abstraction layer.

Can you retest with the latest master, or at least SPDK v17.07?  To do this, you will need to update your nvmf.conf to specify the NVMe devices you want to use as bdevs – see the NVMe bdev configuration docs [1] and NVMe-oF target Getting Started guide [2] for more details.

Thanks,
-- Daniel

[1]: http://www.spdk.io/doc/bdev.html#bdev_config_nvme
[2]: http://www.spdk.io/doc/nvmf.html#nvmf_getting_started

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Gyan Prakash
Sent: Tuesday, October 17, 2017 12:05 PM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] NVMf Target configuration issue

Hi all,
I see issue with NVMf Target configuration. I also see the thread  [SPDK] NVMf target configuration issue Thu Sep 7 01:29:33 PDT 2017 which is for the similar issue what I am seeing. I tried the suggestion from that thread, but it was not helpful for my problem.

 Error Message:
 # ./nvmf_tgt -t all -c nvmf.conf
Starting DPDK 17.02.0 initialization...
[ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Occupied cpu core mask is 0xfff
Occupied cpu socket mask is 0x1
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL:   probe driver: 8086:2f20 spdk_ioat
 Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
EAL: PCI device 0000:00:04.1 on NUMA socket 0
EAL:   probe driver: 8086:2f21 spdk_ioat
 Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
EAL: PCI device 0000:00:04.2 on NUMA socket 0
EAL:   probe driver: 8086:2f22 spdk_ioat
 Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
EAL: PCI device 0000:00:04.3 on NUMA socket 0
EAL:   probe driver: 8086:2f23 spdk_ioat
 Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
EAL: PCI device 0000:00:04.4 on NUMA socket 0
EAL:   probe driver: 8086:2f24 spdk_ioat
 Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
EAL: PCI device 0000:00:04.5 on NUMA socket 0
EAL:   probe driver: 8086:2f25 spdk_ioat
 Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
EAL: PCI device 0000:00:04.6 on NUMA socket 0
EAL:   probe driver: 8086:2f26 spdk_ioat
 Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
EAL: PCI device 0000:00:04.7 on NUMA socket 0
EAL:   probe driver: 8086:2f27 spdk_ioat
 Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Probing device 0000:01:00.0
[nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init (no timeout)
[nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
[nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
[nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable and wait for CSTS.RDY = 0 (timeout 20000 ms)
[nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY = 0 - enabling controller
[nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
[nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable and wait for CSTS.RDY = 1 (timeout 20000 ms)
[nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY = 1 - controller is ready
[nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size 2072576
[nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
[nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller KAS is 0 - not enabling Keep Alive
[nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready (no timeout)
[debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
Total cores available: 12
Reactor started on core 1 on socket 0
Reactor started on core 2 on socket 0
Reactor started on core 3 on socket 0
Reactor started on core 4 on socket 0
Reactor started on core 5 on socket 0
Reactor started on core 6 on socket 0
Reactor started on core 7 on socket 0
Reactor started on core 8 on socket 0
Reactor started on core 9 on socket 0
Reactor started on core 10 on socket 0
Reactor started on core 0 on socket 0
[nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
[nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
[nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
[nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
*** RDMA Transport Init ***
Reactor started on core 11 on socket 0
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0
[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with context 0x1720db0, created completion channel 0x1720ad0
conf.c: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem nqn.2016-06.io.spdk:cnode1: missing NVMe directive
[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is 0x1720b20
nvmf_tgt.c: 313:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf() failed
[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is 0x17205b0
[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete





# ./setup.sh
0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic

# lspci | grep -i vola*
01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)



# lspci -v -s 0000:01:00.0
01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) (prog-if 02 [NVM Express])
        Subsystem: Intel Corporation DC P3700 SSD
        Physical Slot: 1
        Flags: fast devsel, IRQ 24, NUMA node 0
        Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at fb400000 [disabled] [size=64K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI-X: Enable- Count=32 Masked-
        Capabilities: [60] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Virtual Channel
        Capabilities: [180] Power Budgeting <?>
        Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
        Capabilities: [2a0] #19
        Kernel driver in use: uio_pci_generic
        Kernel modules: nvme



_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk



[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 90729 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] NVMf Target configuration issue
@ 2017-10-18  0:41 Harris, James R
  0 siblings, 0 replies; 18+ messages in thread
From: Harris, James R @ 2017-10-18  0:41 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 9131 bytes --]

Hi Gyan,

libnuma/numactl is a recently added package dependency.  You can run scripts/pkgdep.sh or just install it explicitly: numactl-devel on RHEL/CentOS or libnuma-dev on Ubuntu/Debian.
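
For example, a rough sketch (run from the top of the SPDK source tree; this assumes the script can install packages on your system):

  scripts/pkgdep.sh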

-Jim


From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Gyan Prakash <gyapra2016(a)gmail.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Tuesday, October 17, 2017 at 5:36 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] NVMf Target configuration issue

Hi all,
I got the latest master of SPDK and getting following error:

/root/test-spdk/spdk/dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c:56:18: fatal error: numa.h: No such file or directory
 #include <numa.h>
                  ^
compilation terminated.
  CC eal_timer.o
  CC eal_interrupts.o
  CC eal_alarm.o
make[8]: *** [eal_memory.o] Error 1
make[8]: *** Waiting for unfinished jobs....
make[7]: *** [eal] Error 2
make[6]: *** [linuxapp] Error 2
make[5]: *** [librte_eal] Error 2
make[4]: *** [lib] Error 2
make[3]: *** [all] Error 2
make[3]: Leaving directory `/root/test-spdk/spdk/dpdk'
make[2]: *** [all] Error 2
make[1]: *** [all] Error 2
make: *** [dpdkbuild] Error 2

I am using the in-built DPDK and I do not see the file "numa.h" anywhere. Am I supposed to get something installed on my system to get the numa.h?
I used "make CONFIG_RDMA=y CONFIG_DEBUG=y" as my build command.

Thanks & regards,
Gyan


On Tue, Oct 17, 2017 at 1:03 PM, Gyan Prakash <gyapra2016(a)gmail.com<mailto:gyapra2016(a)gmail.com>> wrote:
Hi Daniel,

Thanks for the reply. You are correct, I am using SPDK v17.03. I will test with SPDK v17.07.

I will also check the referenced documents for the configuration.

Thanks,
Gyan

On Tue, Oct 17, 2017 at 12:54 PM, Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>> wrote:
Hi Gyan,

It looks like you are using an older version of SPDK before the NVMe-oF target was changed to always use the bdev abstraction layer.

Can you retest with the latest master, or at least SPDK v17.07?  To do this, you will need to update your nvmf.conf to specify the NVMe devices you want to use as bdevs – see the NVMe bdev configuration docs [1] and NVMe-oF target Getting Started guide [2] for more details.

Thanks,
-- Daniel

[1]: http://www.spdk.io/doc/bdev.html#bdev_config_nvme
[2]: http://www.spdk.io/doc/nvmf.html#nvmf_getting_started

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Gyan Prakash
Sent: Tuesday, October 17, 2017 12:05 PM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] NVMf Target configuration issue

Hi all,
I see issue with NVMf Target configuration. I also see the thread  [SPDK] NVMf target configuration issue Thu Sep 7 01:29:33 PDT 2017 which is for the similar issue what I am seeing. I tried the suggestion from that thread, but it was not helpful for my problem.

 Error Message:
 # ./nvmf_tgt -t all -c nvmf.conf
Starting DPDK 17.02.0 initialization...
[ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Occupied cpu core mask is 0xfff
Occupied cpu socket mask is 0x1
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL:   probe driver: 8086:2f20 spdk_ioat
 Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
EAL: PCI device 0000:00:04.1 on NUMA socket 0
EAL:   probe driver: 8086:2f21 spdk_ioat
 Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
EAL: PCI device 0000:00:04.2 on NUMA socket 0
EAL:   probe driver: 8086:2f22 spdk_ioat
 Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
EAL: PCI device 0000:00:04.3 on NUMA socket 0
EAL:   probe driver: 8086:2f23 spdk_ioat
 Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
EAL: PCI device 0000:00:04.4 on NUMA socket 0
EAL:   probe driver: 8086:2f24 spdk_ioat
 Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
EAL: PCI device 0000:00:04.5 on NUMA socket 0
EAL:   probe driver: 8086:2f25 spdk_ioat
 Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
EAL: PCI device 0000:00:04.6 on NUMA socket 0
EAL:   probe driver: 8086:2f26 spdk_ioat
 Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
EAL: PCI device 0000:00:04.7 on NUMA socket 0
EAL:   probe driver: 8086:2f27 spdk_ioat
 Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Probing device 0000:01:00.0
[nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init (no timeout)
[nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
[nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
[nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable and wait for CSTS.RDY = 0 (timeout 20000 ms)
[nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY = 0 - enabling controller
[nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
[nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable and wait for CSTS.RDY = 1 (timeout 20000 ms)
[nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY = 1 - controller is ready
[nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size 2072576
[nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
[nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller KAS is 0 - not enabling Keep Alive
[nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready (no timeout)
[debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
Total cores available: 12
Reactor started on core 1 on socket 0
Reactor started on core 2 on socket 0
Reactor started on core 3 on socket 0
Reactor started on core 4 on socket 0
Reactor started on core 5 on socket 0
Reactor started on core 6 on socket 0
Reactor started on core 7 on socket 0
Reactor started on core 8 on socket 0
Reactor started on core 9 on socket 0
Reactor started on core 10 on socket 0
Reactor started on core 0 on socket 0
[nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
[nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
[nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
[nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
*** RDMA Transport Init ***
Reactor started on core 11 on socket 0
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0
[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with context 0x1720db0, created completion channel 0x1720ad0
conf.c: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem nqn.2016-06.io.spdk:cnode1: missing NVMe directive
[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is 0x1720b20
nvmf_tgt.c: 313:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf() failed
[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is 0x17205b0
[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete





# ./setup.sh
0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic

# lspci | grep -i vola*
01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)



# lspci -v -s 0000:01:00.0
01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) (prog-if 02 [NVM Express])
        Subsystem: Intel Corporation DC P3700 SSD
        Physical Slot: 1
        Flags: fast devsel, IRQ 24, NUMA node 0
        Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at fb400000 [disabled] [size=64K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI-X: Enable- Count=32 Masked-
        Capabilities: [60] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Virtual Channel
        Capabilities: [180] Power Budgeting <?>
        Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
        Capabilities: [2a0] #19
        Kernel driver in use: uio_pci_generic
        Kernel modules: nvme



_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk



[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 39779 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] NVMf Target configuration issue
@ 2017-10-18  0:36 Gyan Prakash
  0 siblings, 0 replies; 18+ messages in thread
From: Gyan Prakash @ 2017-10-18  0:36 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 9554 bytes --]

Hi all,

I got the latest master of SPDK and am getting the following error:

/root/test-spdk/spdk/dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c:56:18:
fatal error: numa.h: No such file or directory
 #include <numa.h>
                  ^
compilation terminated.
  CC eal_timer.o
  CC eal_interrupts.o
  CC eal_alarm.o
make[8]: *** [eal_memory.o] Error 1
make[8]: *** Waiting for unfinished jobs....
make[7]: *** [eal] Error 2
make[6]: *** [linuxapp] Error 2
make[5]: *** [librte_eal] Error 2
make[4]: *** [lib] Error 2
make[3]: *** [all] Error 2
make[3]: Leaving directory `/root/test-spdk/spdk/dpdk'
make[2]: *** [all] Error 2
make[1]: *** [all] Error 2
make: *** [dpdkbuild] Error 2


I am using the in-built DPDK and I do not see the file "numa.h" anywhere.
Do I need to install something on my system to get numa.h?

I used "make CONFIG_RDMA=y CONFIG_DEBUG=y" as my build command.

Thanks & regards,
Gyan


On Tue, Oct 17, 2017 at 1:03 PM, Gyan Prakash <gyapra2016(a)gmail.com> wrote:

> Hi Daniel,
>
> Thanks for the reply. You are correct, I am using SPDK v17.03. I will test
> with SPDK v17.07.
>
> I will also check the referenced documents for the configuration.
>
> Thanks,
> Gyan
>
> On Tue, Oct 17, 2017 at 12:54 PM, Verkamp, Daniel <
> daniel.verkamp(a)intel.com> wrote:
>
>> Hi Gyan,
>>
>>
>>
>> It looks like you are using an older version of SPDK before the NVMe-oF
>> target was changed to always use the bdev abstraction layer.
>>
>>
>>
>> Can you retest with the latest master, or at least SPDK v17.07?  To do
>> this, you will need to update your nvmf.conf to specify the NVMe devices
>> you want to use as bdevs – see the NVMe bdev configuration docs [1] and
>> NVMe-oF target Getting Started guide [2] for more details.
>>
>>
>>
>> Thanks,
>>
>> -- Daniel
>>
>>
>>
>> [1]: http://www.spdk.io/doc/bdev.html#bdev_config_nvme
>>
>> [2]: http://www.spdk.io/doc/nvmf.html#nvmf_getting_started
>>
>>
>>
>> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Gyan
>> Prakash
>> *Sent:* Tuesday, October 17, 2017 12:05 PM
>> *To:* spdk(a)lists.01.org
>> *Subject:* [SPDK] NVMf Target configuration issue
>>
>>
>> Hi all,
>>
>> I see issue with NVMf Target configuration. I also see the thread  *[SPDK]
>> NVMf target configuration issue **Thu Sep 7 01:29:33 PDT 2017 *which is
>> for the similar issue what I am seeing. I tried the suggestion from that
>> thread, but it was not helpful for my problem.
>>
>>
>>
>> * Error Message:*
>>
>> * # ./nvmf_tgt -t all -c nvmf.conf*
>>
>> Starting DPDK 17.02.0 initialization...
>>
>> [ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
>>
>> EAL: Detected 12 lcore(s)
>>
>> EAL: No free hugepages reported in hugepages-1048576kB
>>
>> EAL: Probing VFIO support...
>>
>> Occupied cpu core mask is 0xfff
>>
>> Occupied cpu socket mask is 0x1
>>
>> EAL: PCI device 0000:00:04.0 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f20 spdk_ioat
>>
>>  Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
>>
>> EAL: PCI device 0000:00:04.1 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f21 spdk_ioat
>>
>>  Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
>>
>> EAL: PCI device 0000:00:04.2 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f22 spdk_ioat
>>
>>  Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
>>
>> EAL: PCI device 0000:00:04.3 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f23 spdk_ioat
>>
>>  Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
>>
>> EAL: PCI device 0000:00:04.4 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f24 spdk_ioat
>>
>>  Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
>>
>> EAL: PCI device 0000:00:04.5 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f25 spdk_ioat
>>
>>  Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
>>
>> EAL: PCI device 0000:00:04.6 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f26 spdk_ioat
>>
>>  Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
>>
>> EAL: PCI device 0000:00:04.7 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:2f27 spdk_ioat
>>
>>  Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
>>
>> Ioat Copy Engine Offload Enabled
>>
>> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>>
>> EAL:   probe driver: 8086:953 spdk_nvme
>>
>> Probing device 0000:01:00.0
>>
>> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init (no
>> timeout)
>>
>> [nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
>>
>> [nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
>>
>> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable
>> and wait for CSTS.RDY = 0 (timeout 20000 ms)
>>
>> [nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY
>> = 0 - enabling controller
>>
>> [nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
>>
>> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable
>> and wait for CSTS.RDY = 1 (timeout 20000 ms)
>>
>> [nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY
>> = 1 - controller is ready
>>
>> [nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size
>> 2072576
>>
>> [nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
>>
>> [nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller
>> KAS is 0 - not enabling Keep Alive
>>
>> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready (no
>> timeout)
>>
>> [debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
>>
>> Total cores available: 12
>>
>> Reactor started on core 1 on socket 0
>>
>> Reactor started on core 2 on socket 0
>>
>> Reactor started on core 3 on socket 0
>>
>> Reactor started on core 4 on socket 0
>>
>> Reactor started on core 5 on socket 0
>>
>> Reactor started on core 6 on socket 0
>>
>> Reactor started on core 7 on socket 0
>>
>> Reactor started on core 8 on socket 0
>>
>> Reactor started on core 9 on socket 0
>>
>> Reactor started on core 10 on socket 0
>>
>> Reactor started on core 0 on socket 0
>>
>> [nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
>>
>> [nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
>>
>> [nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
>>
>> [nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
>>
>> *** RDMA Transport Init ***
>>
>> Reactor started on core 11 on socket 0
>>
>> *allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on
>> socket 0*
>>
>> *allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0*
>>
>> *[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with
>> context 0x1720db0, created completion channel 0x1720ad0*
>>
>> *conf.c: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem
>> nqn.2016-06.io.spdk:cnode1: missing NVMe directive*
>>
>> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
>> 0x1720b20*
>>
>> *nvmf_tgt.c: 313:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf()
>> failed*
>>
>> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
>> 0x17205b0*
>>
>> *[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete*
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> *# ./setup.sh*
>>
>> 0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
>>
>> 0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
>>
>> 0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic
>>
>>
>>
>> *# lspci | grep -i vola**
>>
>> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data
>> Center SSD (rev 01)
>>
>>
>>
>>
>>
>>
>>
>> *# lspci -v -s 0000:01:00.0*
>>
>> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data
>> Center SSD (rev 01) (prog-if 02 [NVM Express])
>>
>>         Subsystem: Intel Corporation DC P3700 SSD
>>
>>         Physical Slot: 1
>>
>>         Flags: fast devsel, IRQ 24, NUMA node 0
>>
>>         Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
>>
>>         Expansion ROM at fb400000 [disabled] [size=64K]
>>
>>         Capabilities: [40] Power Management version 3
>>
>>         Capabilities: [50] MSI-X: Enable- Count=32 Masked-
>>
>>         Capabilities: [60] Express Endpoint, MSI 00
>>
>>         Capabilities: [100] Advanced Error Reporting
>>
>>         Capabilities: [150] Virtual Channel
>>
>>         Capabilities: [180] Power Budgeting <?>
>>
>>         Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
>>
>>         Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
>>
>>         Capabilities: [2a0] #19
>>
>>         Kernel driver in use: uio_pci_generic
>>
>>         Kernel modules: nvme
>>
>>
>>
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 27598 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] NVMf Target configuration issue
@ 2017-10-17 20:03 Gyan Prakash
  0 siblings, 0 replies; 18+ messages in thread
From: Gyan Prakash @ 2017-10-17 20:03 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 8196 bytes --]

Hi Daniel,

Thanks for the reply. You are correct, I am using SPDK v17.03. I will test
with SPDK v17.07.

I will also check the referenced documents for the configuration.

Thanks,
Gyan

On Tue, Oct 17, 2017 at 12:54 PM, Verkamp, Daniel <daniel.verkamp(a)intel.com>
wrote:

> Hi Gyan,
>
>
>
> It looks like you are using an older version of SPDK before the NVMe-oF
> target was changed to always use the bdev abstraction layer.
>
>
>
> Can you retest with the latest master, or at least SPDK v17.07?  To do
> this, you will need to update your nvmf.conf to specify the NVMe devices
> you want to use as bdevs – see the NVMe bdev configuration docs [1] and
> NVMe-oF target Getting Started guide [2] for more details.
>
>
>
> Thanks,
>
> -- Daniel
>
>
>
> [1]: http://www.spdk.io/doc/bdev.html#bdev_config_nvme
>
> [2]: http://www.spdk.io/doc/nvmf.html#nvmf_getting_started
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Gyan
> Prakash
> *Sent:* Tuesday, October 17, 2017 12:05 PM
> *To:* spdk(a)lists.01.org
> *Subject:* [SPDK] NVMf Target configuration issue
>
>
> Hi all,
>
> I see issue with NVMf Target configuration. I also see the thread  *[SPDK]
> NVMf target configuration issue **Thu Sep 7 01:29:33 PDT 2017 *which is
> for the similar issue what I am seeing. I tried the suggestion from that
> thread, but it was not helpful for my problem.
>
>
>
> * Error Message:*
>
> * # ./nvmf_tgt -t all -c nvmf.conf*
>
> Starting DPDK 17.02.0 initialization...
>
> [ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
>
> EAL: Detected 12 lcore(s)
>
> EAL: No free hugepages reported in hugepages-1048576kB
>
> EAL: Probing VFIO support...
>
> Occupied cpu core mask is 0xfff
>
> Occupied cpu socket mask is 0x1
>
> EAL: PCI device 0000:00:04.0 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f20 spdk_ioat
>
>  Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
>
> EAL: PCI device 0000:00:04.1 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f21 spdk_ioat
>
>  Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
>
> EAL: PCI device 0000:00:04.2 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f22 spdk_ioat
>
>  Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
>
> EAL: PCI device 0000:00:04.3 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f23 spdk_ioat
>
>  Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
>
> EAL: PCI device 0000:00:04.4 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f24 spdk_ioat
>
>  Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
>
> EAL: PCI device 0000:00:04.5 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f25 spdk_ioat
>
>  Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
>
> EAL: PCI device 0000:00:04.6 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f26 spdk_ioat
>
>  Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
>
> EAL: PCI device 0000:00:04.7 on NUMA socket 0
>
> EAL:   probe driver: 8086:2f27 spdk_ioat
>
>  Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
>
> Ioat Copy Engine Offload Enabled
>
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>
> EAL:   probe driver: 8086:953 spdk_nvme
>
> Probing device 0000:01:00.0
>
> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init (no
> timeout)
>
> [nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
>
> [nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
>
> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable
> and wait for CSTS.RDY = 0 (timeout 20000 ms)
>
> [nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY =
> 0 - enabling controller
>
> [nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
>
> [nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable and
> wait for CSTS.RDY = 1 (timeout 20000 ms)
>
> [nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY =
> 1 - controller is ready
>
> [nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size
> 2072576
>
> [nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
>
> [nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller
> KAS is 0 - not enabling Keep Alive
>
> [nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready (no
> timeout)
>
> [debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
>
> Total cores available: 12
>
> Reactor started on core 1 on socket 0
>
> Reactor started on core 2 on socket 0
>
> Reactor started on core 3 on socket 0
>
> Reactor started on core 4 on socket 0
>
> Reactor started on core 5 on socket 0
>
> Reactor started on core 6 on socket 0
>
> Reactor started on core 7 on socket 0
>
> Reactor started on core 8 on socket 0
>
> Reactor started on core 9 on socket 0
>
> Reactor started on core 10 on socket 0
>
> Reactor started on core 0 on socket 0
>
> [nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
>
> [nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
>
> [nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
>
> [nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
>
> *** RDMA Transport Init ***
>
> Reactor started on core 11 on socket 0
>
> *allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on
> socket 0*
>
> *allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0*
>
> *[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with
> context 0x1720db0, created completion channel 0x1720ad0*
>
> *conf.c: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem
> nqn.2016-06.io.spdk:cnode1: missing NVMe directive*
>
> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
> 0x1720b20*
>
> *nvmf_tgt.c: 313:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf()
> failed*
>
> *[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is
> 0x17205b0*
>
> *[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete*
>
>
>
>
>
>
>
>
>
>
>
> *# ./setup.sh*
>
> 0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
>
> 0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
>
> 0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
>
> 0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
>
> 0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
>
> 0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
>
> 0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
>
> 0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
>
> 0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic
>
>
>
> *# lspci | grep -i vola**
>
> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center
> SSD (rev 01)
>
>
>
>
>
>
>
> *# lspci -v -s 0000:01:00.0*
>
> 01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center
> SSD (rev 01) (prog-if 02 [NVM Express])
>
>         Subsystem: Intel Corporation DC P3700 SSD
>
>         Physical Slot: 1
>
>         Flags: fast devsel, IRQ 24, NUMA node 0
>
>         Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
>
>         Expansion ROM at fb400000 [disabled] [size=64K]
>
>         Capabilities: [40] Power Management version 3
>
>         Capabilities: [50] MSI-X: Enable- Count=32 Masked-
>
>         Capabilities: [60] Express Endpoint, MSI 00
>
>         Capabilities: [100] Advanced Error Reporting
>
>         Capabilities: [150] Virtual Channel
>
>         Capabilities: [180] Power Budgeting <?>
>
>         Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
>
>         Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
>
>         Capabilities: [2a0] #19
>
>         Kernel driver in use: uio_pci_generic
>
>         Kernel modules: nvme
>
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 25808 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] NVMf Target configuration issue
@ 2017-10-17 19:54 Verkamp, Daniel
  0 siblings, 0 replies; 18+ messages in thread
From: Verkamp, Daniel @ 2017-10-17 19:54 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6995 bytes --]

Hi Gyan,

It looks like you are using an older version of SPDK before the NVMe-oF target was changed to always use the bdev abstraction layer.

Can you retest with the latest master, or at least SPDK v17.07?  To do this, you will need to update your nvmf.conf to specify the NVMe devices you want to use as bdevs – see the NVMe bdev configuration docs [1] and NVMe-oF target Getting Started guide [2] for more details.
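
As a rough sketch of the kind of nvmf.conf fragment this describes (the PCI address matches the lspci output quoted below; the NQN, serial number and listen address are placeholders, and the exact directive names for your SPDK version are in the linked docs):

[Nvme]
  TransportId "trtype:PCIe traddr:0000:01:00.0" Nvme0

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Listen RDMA 192.168.1.10:4420
  AllowAnyHost Yes
  SN SPDK00000000000001
  Namespace Nvme0n1 1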

Thanks,
-- Daniel

[1]: http://www.spdk.io/doc/bdev.html#bdev_config_nvme
[2]: http://www.spdk.io/doc/nvmf.html#nvmf_getting_started

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Gyan Prakash
Sent: Tuesday, October 17, 2017 12:05 PM
To: spdk(a)lists.01.org
Subject: [SPDK] NVMf Target configuration issue

Hi all,
I see issue with NVMf Target configuration. I also see the thread  [SPDK] NVMf target configuration issue Thu Sep 7 01:29:33 PDT 2017 which is for the similar issue what I am seeing. I tried the suggestion from that thread, but it was not helpful for my problem.

 Error Message:
 # ./nvmf_tgt -t all -c nvmf.conf
Starting DPDK 17.02.0 initialization...
[ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Occupied cpu core mask is 0xfff
Occupied cpu socket mask is 0x1
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL:   probe driver: 8086:2f20 spdk_ioat
 Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
EAL: PCI device 0000:00:04.1 on NUMA socket 0
EAL:   probe driver: 8086:2f21 spdk_ioat
 Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
EAL: PCI device 0000:00:04.2 on NUMA socket 0
EAL:   probe driver: 8086:2f22 spdk_ioat
 Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
EAL: PCI device 0000:00:04.3 on NUMA socket 0
EAL:   probe driver: 8086:2f23 spdk_ioat
 Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
EAL: PCI device 0000:00:04.4 on NUMA socket 0
EAL:   probe driver: 8086:2f24 spdk_ioat
 Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
EAL: PCI device 0000:00:04.5 on NUMA socket 0
EAL:   probe driver: 8086:2f25 spdk_ioat
 Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
EAL: PCI device 0000:00:04.6 on NUMA socket 0
EAL:   probe driver: 8086:2f26 spdk_ioat
 Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
EAL: PCI device 0000:00:04.7 on NUMA socket 0
EAL:   probe driver: 8086:2f27 spdk_ioat
 Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Probing device 0000:01:00.0
[nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init (no timeout)
[nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
[nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
[nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable and wait for CSTS.RDY = 0 (timeout 20000 ms)
[nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY = 0 - enabling controller
[nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
[nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable and wait for CSTS.RDY = 1 (timeout 20000 ms)
[nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY = 1 - controller is ready
[nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size 2072576
[nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
[nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller KAS is 0 - not enabling Keep Alive
[nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready (no timeout)
[debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
Total cores available: 12
Reactor started on core 1 on socket 0
Reactor started on core 2 on socket 0
Reactor started on core 3 on socket 0
Reactor started on core 4 on socket 0
Reactor started on core 5 on socket 0
Reactor started on core 6 on socket 0
Reactor started on core 7 on socket 0
Reactor started on core 8 on socket 0
Reactor started on core 9 on socket 0
Reactor started on core 10 on socket 0
Reactor started on core 0 on socket 0
[nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
[nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
[nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
[nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
*** RDMA Transport Init ***
Reactor started on core 11 on socket 0
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0
[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with context 0x1720db0, created completion channel 0x1720ad0
conf.c: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem nqn.2016-06.io.spdk:cnode1: missing NVMe directive
[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is 0x1720b20
nvmf_tgt.c: 313:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf() failed
[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is 0x17205b0
[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete





# ./setup.sh
0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic

# lspci | grep -i vola*
01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)



# lspci -v -s 0000:01:00.0
01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) (prog-if 02 [NVM Express])
        Subsystem: Intel Corporation DC P3700 SSD
        Physical Slot: 1
        Flags: fast devsel, IRQ 24, NUMA node 0
        Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at fb400000 [disabled] [size=64K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI-X: Enable- Count=32 Masked-
        Capabilities: [60] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Virtual Channel
        Capabilities: [180] Power Budgeting <?>
        Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
        Capabilities: [2a0] #19
        Kernel driver in use: uio_pci_generic
        Kernel modules: nvme



[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 27333 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [SPDK] NVMf Target configuration issue
@ 2017-10-17 19:04 Gyan Prakash
  0 siblings, 0 replies; 18+ messages in thread
From: Gyan Prakash @ 2017-10-17 19:04 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6275 bytes --]

Hi all,
I see an issue with the NVMf Target configuration. I also found the thread [SPDK]
NVMf target configuration issue (Thu Sep 7 01:29:33 PDT 2017), which covers a
similar issue to the one I am seeing. I tried the suggestion from that thread,
but it did not resolve my problem.

* Error Message:*
* # ./nvmf_tgt -t all -c nvmf.conf*
Starting DPDK 17.02.0 initialization...
[ DPDK EAL parameters: nvmf -c fff --file-prefix=spdk_pid3911 ]
EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Occupied cpu core mask is 0xfff
Occupied cpu socket mask is 0x1
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL:   probe driver: 8086:2f20 spdk_ioat
 Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2f20
EAL: PCI device 0000:00:04.1 on NUMA socket 0
EAL:   probe driver: 8086:2f21 spdk_ioat
 Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2f21
EAL: PCI device 0000:00:04.2 on NUMA socket 0
EAL:   probe driver: 8086:2f22 spdk_ioat
 Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2f22
EAL: PCI device 0000:00:04.3 on NUMA socket 0
EAL:   probe driver: 8086:2f23 spdk_ioat
 Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2f23
EAL: PCI device 0000:00:04.4 on NUMA socket 0
EAL:   probe driver: 8086:2f24 spdk_ioat
 Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2f24
EAL: PCI device 0000:00:04.5 on NUMA socket 0
EAL:   probe driver: 8086:2f25 spdk_ioat
 Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2f25
EAL: PCI device 0000:00:04.6 on NUMA socket 0
EAL:   probe driver: 8086:2f26 spdk_ioat
 Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2f26
EAL: PCI device 0000:00:04.7 on NUMA socket 0
EAL:   probe driver: 8086:2f27 spdk_ioat
 Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2f27
Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Probing device 0000:01:00.0
[nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to init (no
timeout)
[nvme] nvme_ctrlr.c:1135:nvme_ctrlr_process_init: CC.EN = 1
[nvme] nvme_ctrlr.c:1149:nvme_ctrlr_process_init: Setting CC.EN = 0
[nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to disable and
wait for CSTS.RDY = 0 (timeout 20000 ms)
[nvme] nvme_ctrlr.c:1206:nvme_ctrlr_process_init: CC.EN = 0 && CSTS.RDY = 0
- enabling controller
[nvme] nvme_ctrlr.c:1208:nvme_ctrlr_process_init: Setting CC.EN = 1
[nvme] nvme_ctrlr.c: 506:nvme_ctrlr_set_state: setting state to enable and
wait for CSTS.RDY = 1 (timeout 20000 ms)
[nvme] nvme_ctrlr.c:1217:nvme_ctrlr_process_init: CC.EN = 1 && CSTS.RDY = 1
- controller is ready
[nvme] nvme_ctrlr.c: 594:nvme_ctrlr_identify: transport max_xfer_size
2072576
[nvme] nvme_ctrlr.c: 598:nvme_ctrlr_identify: MDTS max_xfer_size 131072
[nvme] nvme_ctrlr.c: 673:nvme_ctrlr_set_keep_alive_timeout: Controller KAS
is 0 - not enabling Keep Alive
[nvme] nvme_ctrlr.c: 502:nvme_ctrlr_set_state: setting state to ready (no
timeout)
[debug] bdev.c: 932:spdk_bdev_register: Inserting bdev Nvme0n1 into list
Total cores available: 12
Reactor started on core 1 on socket 0
Reactor started on core 2 on socket 0
Reactor started on core 3 on socket 0
Reactor started on core 4 on socket 0
Reactor started on core 5 on socket 0
Reactor started on core 6 on socket 0
Reactor started on core 7 on socket 0
Reactor started on core 8 on socket 0
Reactor started on core 9 on socket 0
Reactor started on core 10 on socket 0
Reactor started on core 0 on socket 0
[nvmf] nvmf.c:  68:spdk_nvmf_tgt_init: Max Queues Per Session: 4
[nvmf] nvmf.c:  69:spdk_nvmf_tgt_init: Max Queue Depth: 128
[nvmf] nvmf.c:  70:spdk_nvmf_tgt_init: Max In Capsule Data: 4096 bytes
[nvmf] nvmf.c:  71:spdk_nvmf_tgt_init: Max I/O Size: 131072 bytes
*** RDMA Transport Init ***
Reactor started on core 11 on socket 0
*allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on
socket 0*
*allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 5 on socket 0*
*[rdma] rdma.c:1177:spdk_nvmf_rdma_listen: For listen id 0x1720840 with
context 0x1720db0, created completion channel 0x1720ad0*
*conf.c: 555:spdk_nvmf_construct_subsystem: ***ERROR*** Subsystem
nqn.2016-06.io.spdk:cnode1: missing NVMe directive*
*[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is 0x1720b20*
*nvmf_tgt.c: 313:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf()
failed*
*[nvmf] subsystem.c: 231:spdk_nvmf_delete_subsystem: subsystem is 0x17205b0*
*[nvme] nvme_ctrlr.c: 406:nvme_ctrlr_shutdown: shutdown complete*





*# ./setup.sh*
0000:01:00.0 (8086 0953): nvme -> uio_pci_generic
0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic

*# lspci | grep -i vola**
01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center
SSD (rev 01)



*# lspci -v -s 0000:01:00.0*
01:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center
SSD (rev 01) (prog-if 02 [NVM Express])
        Subsystem: Intel Corporation DC P3700 SSD
        Physical Slot: 1
        Flags: fast devsel, IRQ 24, NUMA node 0
        Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at fb400000 [disabled] [size=64K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI-X: Enable- Count=32 Masked-
        Capabilities: [60] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Virtual Channel
        Capabilities: [180] Power Budgeting <?>
        Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [270] Device Serial Number 55-cd-2e-41-4d-34-1e-1b
        Capabilities: [2a0] #19
        Kernel driver in use: uio_pci_generic
        Kernel modules: nvme

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 12490 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] NVMf target configuration issue
@ 2017-09-07  8:29 Kirubakaran Kaliannan
  0 siblings, 0 replies; 18+ messages in thread
From: Kirubakaran Kaliannan @ 2017-09-07  8:29 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 9496 bytes --]

Hi Harris,



Yes, I missed that part. Thanks, I was able to get going with NVMe now.



Thanks

-kiru



*From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Harris,
James R
*Sent:* Wednesday, September 06, 2017 8:59 PM
*To:* Storage Performance Development Kit
*Subject:* Re: [SPDK] NVMf target configuration issue



Hi,



Can you confirm that your NVMe device is bound to a uio or vfio driver, and
not the kernel NVMe driver?  SPDK will not try to use devices that are
still bound to the kernel nvme driver.  You can use scripts/setup.sh to
bind all NVMe devices to uio_pci_generic or vfio-pci (based on whether your
system has an IOMMU enabled).



lspci -v -s 0000:07:00.0 will show you which kernel module your NVMe device
is currently bound to.
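
A quick sketch of that check (run from the SPDK source tree, using the same PCI address):

  scripts/setup.sh
  lspci -v -s 0000:07:00.0 | grep -i "kernel driver"

After setup.sh has run, the driver line should show uio_pci_generic or vfio-pci rather than nvme.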



Regards,



-Jim





*From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Kirubakaran Kaliannan
<kirubak(a)zadarastorage.com>
*Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
*Date: *Wednesday, September 6, 2017 at 12:28 AM
*To: *Storage Performance Development Kit <spdk(a)lists.01.org>
*Subject: *Re: [SPDK] NVMf target configuration issue





Hi,



I have the SPDK target working for AIO and Malloc.



I now have configured NVMe device on the same setup, but not able to start
the target, as the target is not finding the nvme0 bdev.



Do we have any other dependency for configuring Nvme devices ?



*Nvmf.conf*



[Global]



[Rpc]

  Enable No

  Listen 127.0.0.1



[Nvmf]

  MaxQueuesPerSession 4

  AcceptorPollRate 10000



[Nvme]

  TransportId "trtype:PCIe traddr:0000:07:00.0" Nvme0

  RetryCount 4

  Timeout 0

  AdminPollRate 100000

  HotplugEnable Yes



[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 0

  Listen RDMA 10.3.7.2:4420

  AllowAnyHost Yes

  SN SPDK00000000000001

  Namespace Nvme0n1 1



*# app/nvmf_tgt/nvmf_tgt -t all -c /etc/spdk/nvmf.conf *

Starting DPDK 17.05.0 initialization...

[ DPDK EAL parameters: nvmf -c 0x1 --file-prefix=spdk_pid97433 ]

EAL: Detected 20 lcore(s)

EAL: Probing VFIO support...

EAL: VFIO support initialized

Total cores available: 1

Occupied cpu socket mask is 0x1

reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on
socket 0

EAL: PCI device 0000:00:04.0 on NUMA socket 0

EAL:   probe driver: 8086:6f20 spdk_ioat

EAL: PCI device 0000:00:04.1 on NUMA socket 0

EAL:   probe driver: 8086:6f21 spdk_ioat

EAL: PCI device 0000:00:04.2 on NUMA socket 0

EAL:   probe driver: 8086:6f22 spdk_ioat

EAL: PCI device 0000:00:04.3 on NUMA socket 0

EAL:   probe driver: 8086:6f23 spdk_ioat

EAL: PCI device 0000:00:04.4 on NUMA socket 0

EAL:   probe driver: 8086:6f24 spdk_ioat

EAL: PCI device 0000:00:04.5 on NUMA socket 0

EAL:   probe driver: 8086:6f25 spdk_ioat

EAL: PCI device 0000:00:04.6 on NUMA socket 0

EAL:   probe driver: 8086:6f26 spdk_ioat

EAL: PCI device 0000:00:04.7 on NUMA socket 0

EAL:   probe driver: 8086:6f27 spdk_ioat

copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine
Offload Enabled

EAL: PCI device 0000:00:04.0 on NUMA socket 0

EAL:   probe driver: 8086:6f20 spdk_ioat

EAL: PCI device 0000:00:04.1 on NUMA socket 0

EAL:   probe driver: 8086:6f21 spdk_ioat

EAL: PCI device 0000:00:04.2 on NUMA socket 0

EAL:   probe driver: 8086:6f22 spdk_ioat

EAL: PCI device 0000:00:04.3 on NUMA socket 0

EAL:   probe driver: 8086:6f23 spdk_ioat

EAL: PCI device 0000:00:04.4 on NUMA socket 0

EAL:   probe driver: 8086:6f24 spdk_ioat

EAL: PCI device 0000:00:04.5 on NUMA socket 0

EAL:   probe driver: 8086:6f25 spdk_ioat

EAL: PCI device 0000:00:04.6 on NUMA socket 0

EAL:   probe driver: 8086:6f26 spdk_ioat

EAL: PCI device 0000:00:04.7 on NUMA socket 0

EAL:   probe driver: 8086:6f27 spdk_ioat

EAL: PCI device 0000:07:00.0 on NUMA socket 0

EAL:   probe driver: 8086:953 spdk_nvme

nvmf.c:  67:spdk_nvmf_tgt_init: *INFO*: Max Queue Pairs Per Controller: 4

nvmf.c:  68:spdk_nvmf_tgt_init: *INFO*: Max Queue Depth: 128

nvmf.c:  69:spdk_nvmf_tgt_init: *INFO*: Max In Capsule Data: 4096 bytes

nvmf.c:  70:spdk_nvmf_tgt_init: *INFO*: Max I/O Size: 131072 bytes

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0

transport.c:  63:spdk_nvmf_transport_create: *ERROR*: inside
spdk_nvmf_transport_create 1

rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***

rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on
10.3.7.2 port 4420 ***

bdev.c: 214:spdk_bdev_get_by_name: *ERROR*: bdev's returned null 'Nvme0n1'

*conf.c: 492:spdk_nvmf_construct_subsystem: *ERROR*: Could not find
namespace bdev 'Nvme0n1'*

subsystem.c: 232:spdk_nvmf_delete_subsystem: *INFO*: subsystem is 0x1218ef0

nvmf_tgt.c: 276:spdk_nvmf_startup: *ERROR*: spdk_nvmf_parse_conf() failed

subsystem.c: 232:spdk_nvmf_delete_subsystem: *INFO*: subsystem is 0x1218bf0



*# lspci  |grep -i vola*

07:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center
SSD (rev 01)



Thanks

-kiru







*From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Yang, Ziye
*Sent:* Tuesday, September 05, 2017 10:41 AM
*To:* Storage Performance Development Kit
*Subject:* Re: [SPDK] NVMf target configuration issue



And the other choice is to remove the Host related line, which means any
host can connect.



*From:* SPDK [mailto:spdk-bounces(a)lists.01.org <spdk-bounces(a)lists.01.org>] *On
Behalf Of *Yang, Ziye
*Sent:* Tuesday, September 5, 2017 1:04 PM
*To:* Storage Performance Development Kit <spdk(a)lists.01.org>
*Subject:* Re: [SPDK] NVMf target configuration issue



Hi Kiru,



According to the error message, I suggest that you add the Host info in the
following section.





[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 0

  Listen RDMA 10.3.7.2:4420

  Host nqn.2016-06.io.spdk:init

*                 Host
nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14*

  SN SPDK00000000000001

  Namespace AIO0





Thanks.



*From:* SPDK [mailto:spdk-bounces(a)lists.01.org <spdk-bounces(a)lists.01.org>] *On
Behalf Of *Kirubakaran Kaliannan
*Sent:* Tuesday, September 5, 2017 12:52 PM
*To:* Storage Performance Development Kit <spdk(a)lists.01.org>
*Subject:* [SPDK] NVMf target configuration issue







Hi All,



I am trying to get the nvmf configuration going with SPDK.



I have the following configuration,



*# ofed 2.4 on both target and initiator*

*# SPDK + 3.18 kernel on target *

*# 4.9 kernel on initiator*

*# mellanox connect3X-Pro*



*/etc/spdk/nvmf.conf*



[Global]

[Rpc]

  Enable No

  Listen 127.0.0.1

[AIO]

  AIO /dev/sdd AIO0



[Nvmf]

  MaxQueuesPerSession 4

  AcceptorPollRate 10000



[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 0

  Listen RDMA 10.3.7.2:4420

  Host nqn.2016-06.io.spdk:init

  SN SPDK00000000000001

  Namespace AIO0





*From initiator I tried discover and connect*



*# nvme discover -t rdma -a 10.3.7.2 -s 4420*



Discovery Log Number of Records 1, Generation counter 4

=====Discovery Log Entry 0======

trtype:  rdma

adrfam:  ipv4

subtype: nvme subsystem

treq:    not specified

portid:  0

trsvcid: 4420

subnqn:  nqn.2016-06.io.spdk:cnode1

traddr:  10.3.7.2

rdma_prtype: not specified

rdma_qptype: connected

rdma_cms:    rdma-cm

rdma_pkey: 0x0000



*# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420*

Failed to write to /dev/nvme-fabrics: Input/output error



[dmesg]

[59909.541392] nvme nvme0: new ctrl: NQN
"nqn.2014-08.org.nvmexpress.discovery", addr 10.3.7.2:4420

[59940.389952] nvme nvme0: Connect command failed, error wo/DNR bit: 388



*On target I get the following error*



# app/nvmf_tgt/nvmf_tgt -c /etc/spdk/nvmf.conf  > /tmp/1

EAL: Detected 20 lcore(s)

reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on
socket 0

copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine
Offload Enabled

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0

rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***

rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on
10.3.7.2 port 4420 ***

conf.c: 500:spdk_nvmf_construct_subsystem: *NOTICE*: Attaching block device
AIO0 to subsystem nqn.2016-06.io.spdk:cnode1

nvmf_tgt.c: 290:spdk_nvmf_startup: *NOTICE*: Acceptor running on core 0 on
socket 0

*request.c: 171:nvmf_process_connect: *ERROR*: Subsystem
'nqn.2016-06.io.spdk:cnode1' does not allow host
'nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14'*





can you please help on why I am not able to connect in this configuration.



Regards,

-kiru

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 29075 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] NVMf target configuration issue
@ 2017-09-06 15:28 Harris, James R
  0 siblings, 0 replies; 18+ messages in thread
From: Harris, James R @ 2017-09-06 15:28 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 8728 bytes --]

Hi,

Can you confirm that your NVMe device is bound to a uio or vfio driver, and not the kernel NVMe driver?  SPDK will not try to use devices that are still bound to the kernel nvme driver.  You can use scripts/setup.sh to bind all NVMe devices to uio_pci_generic or vfio-pci (based on whether your system has an IOMMU enabled).

lspci -v -s 0000:07:00.0 will show you which kernel module your NVMe device is bound to currently.
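
As a rough sketch of that check-and-rebind sequence (the path to setup.sh and its exact output will vary with your SPDK version; the "Kernel driver in use" line is the part that matters):

# lspci -v -s 0000:07:00.0 | grep "Kernel driver in use"
        Kernel driver in use: nvme               <-- still owned by the kernel driver
# scripts/setup.sh
# lspci -v -s 0000:07:00.0 | grep "Kernel driver in use"
        Kernel driver in use: uio_pci_generic    <-- SPDK can now claim the device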

Regards,

-Jim


From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Kirubakaran Kaliannan <kirubak(a)zadarastorage.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Wednesday, September 6, 2017 at 12:28 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] NVMf target configuration issue


Hi,

I have the SPDK target working for AIO and Malloc.

I now have configured NVMe device on the same setup, but not able to start the target, as the target is not finding the nvme0 bdev.

Do we have any other dependency for configuring Nvme devices ?

Nvmf.conf

[Global]

[Rpc]
  Enable No
  Listen 127.0.0.1

[Nvmf]
  MaxQueuesPerSession 4
  AcceptorPollRate 10000

[Nvme]
  TransportId "trtype:PCIe traddr:0000:07:00.0" Nvme0
  RetryCount 4
  Timeout 0
  AdminPollRate 100000
  HotplugEnable Yes

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 0
  Listen RDMA 10.3.7.2:4420<http://10.3.7.2:4420>
  AllowAnyHost Yes
  SN SPDK00000000000001
  Namespace Nvme0n1 1

# app/nvmf_tgt/nvmf_tgt -t all -c /etc/spdk/nvmf.conf
Starting DPDK 17.05.0 initialization...
[ DPDK EAL parameters: nvmf -c 0x1 --file-prefix=spdk_pid97433 ]
EAL: Detected 20 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
Total cores available: 1
Occupied cpu socket mask is 0x1
reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL:   probe driver: 8086:6f20 spdk_ioat
EAL: PCI device 0000:00:04.1 on NUMA socket 0
EAL:   probe driver: 8086:6f21 spdk_ioat
EAL: PCI device 0000:00:04.2 on NUMA socket 0
EAL:   probe driver: 8086:6f22 spdk_ioat
EAL: PCI device 0000:00:04.3 on NUMA socket 0
EAL:   probe driver: 8086:6f23 spdk_ioat
EAL: PCI device 0000:00:04.4 on NUMA socket 0
EAL:   probe driver: 8086:6f24 spdk_ioat
EAL: PCI device 0000:00:04.5 on NUMA socket 0
EAL:   probe driver: 8086:6f25 spdk_ioat
EAL: PCI device 0000:00:04.6 on NUMA socket 0
EAL:   probe driver: 8086:6f26 spdk_ioat
EAL: PCI device 0000:00:04.7 on NUMA socket 0
EAL:   probe driver: 8086:6f27 spdk_ioat
copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL:   probe driver: 8086:6f20 spdk_ioat
EAL: PCI device 0000:00:04.1 on NUMA socket 0
EAL:   probe driver: 8086:6f21 spdk_ioat
EAL: PCI device 0000:00:04.2 on NUMA socket 0
EAL:   probe driver: 8086:6f22 spdk_ioat
EAL: PCI device 0000:00:04.3 on NUMA socket 0
EAL:   probe driver: 8086:6f23 spdk_ioat
EAL: PCI device 0000:00:04.4 on NUMA socket 0
EAL:   probe driver: 8086:6f24 spdk_ioat
EAL: PCI device 0000:00:04.5 on NUMA socket 0
EAL:   probe driver: 8086:6f25 spdk_ioat
EAL: PCI device 0000:00:04.6 on NUMA socket 0
EAL:   probe driver: 8086:6f26 spdk_ioat
EAL: PCI device 0000:00:04.7 on NUMA socket 0
EAL:   probe driver: 8086:6f27 spdk_ioat
EAL: PCI device 0000:07:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
nvmf.c:  67:spdk_nvmf_tgt_init: *INFO*: Max Queue Pairs Per Controller: 4
nvmf.c:  68:spdk_nvmf_tgt_init: *INFO*: Max Queue Depth: 128
nvmf.c:  69:spdk_nvmf_tgt_init: *INFO*: Max In Capsule Data: 4096 bytes
nvmf.c:  70:spdk_nvmf_tgt_init: *INFO*: Max I/O Size: 131072 bytes
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
transport.c:  63:spdk_nvmf_transport_create: *ERROR*: inside spdk_nvmf_transport_create 1
rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***
rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on 10.3.7.2 port 4420 ***
bdev.c: 214:spdk_bdev_get_by_name: *ERROR*: bdev's returned null 'Nvme0n1'
conf.c: 492:spdk_nvmf_construct_subsystem: *ERROR*: Could not find namespace bdev 'Nvme0n1'
subsystem.c: 232:spdk_nvmf_delete_subsystem: *INFO*: subsystem is 0x1218ef0
nvmf_tgt.c: 276:spdk_nvmf_startup: *ERROR*: spdk_nvmf_parse_conf() failed
subsystem.c: 232:spdk_nvmf_delete_subsystem: *INFO*: subsystem is 0x1218bf0

# lspci  |grep -i vola
07:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)

Thanks
-kiru



From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Yang, Ziye
Sent: Tuesday, September 05, 2017 10:41 AM
To: Storage Performance Development Kit
Subject: Re: [SPDK] NVMf target configuration issue

And the other choice is to remove the Host related line, which means any host can connect.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Yang, Ziye
Sent: Tuesday, September 5, 2017 1:04 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] NVMf target configuration issue

Hi Kiru,

According to the error message, I suggest that you add the Host info in the following section.


[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 0
  Listen RDMA 10.3.7.2:4420<http://10.3.7.2:4420>
  Host nqn.2016-06.io.spdk:init
                 Host nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14
  SN SPDK00000000000001
  Namespace AIO0


Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Kirubakaran Kaliannan
Sent: Tuesday, September 5, 2017 12:52 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] NVMf target configuration issue



Hi All,

I am trying to get the nvmf configuration going with SPDK.

I have the following configuration,

# ofed 2.4 on both target and initiator
# SPDK + 3.18 kernel on target
# 4.9 kernel on initiator
# mellanox connect3X-Pro

/etc/spdk/nvmf.conf

[Global]
[Rpc]
  Enable No
  Listen 127.0.0.1
[AIO]
  AIO /dev/sdd AIO0

[Nvmf]
  MaxQueuesPerSession 4
  AcceptorPollRate 10000

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 0
  Listen RDMA 10.3.7.2:4420<http://10.3.7.2:4420>
  Host nqn.2016-06.io.spdk:init
  SN SPDK00000000000001
  Namespace AIO0


From initiator I tried discover and connect

# nvme discover -t rdma -a 10.3.7.2 -s 4420

Discovery Log Number of Records 1, Generation counter 4
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified
portid:  0
trsvcid: 4420
subnqn:  nqn.2016-06.io.spdk:cnode1
traddr:  10.3.7.2
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0x0000

# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420
Failed to write to /dev/nvme-fabrics: Input/output error

[dmesg]
[59909.541392] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 10.3.7.2:4420<http://10.3.7.2:4420>
[59940.389952] nvme nvme0: Connect command failed, error wo/DNR bit: 388

On target I get the following error

# app/nvmf_tgt/nvmf_tgt -c /etc/spdk/nvmf.conf  > /tmp/1
EAL: Detected 20 lcore(s)
reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***
rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on 10.3.7.2 port 4420 ***
conf.c: 500:spdk_nvmf_construct_subsystem: *NOTICE*: Attaching block device AIO0 to subsystem nqn.2016-06.io.spdk:cnode1
nvmf_tgt.c: 290:spdk_nvmf_startup: *NOTICE*: Acceptor running on core 0 on socket 0
request.c: 171:nvmf_process_connect: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14'


can you please help on why I am not able to connect in this configuration.

Regards,
-kiru


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 31030 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] NVMf target configuration issue
@ 2017-09-05 10:38 Kirubakaran Kaliannan
  0 siblings, 0 replies; 18+ messages in thread
From: Kirubakaran Kaliannan @ 2017-09-05 10:38 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 4132 bytes --]

Thanks Yang, adding the host line helped.



Regards

-kiru



*From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Yang, Ziye
*Sent:* Tuesday, September 05, 2017 10:41 AM
*To:* Storage Performance Development Kit
*Subject:* Re: [SPDK] NVMf target configuration issue



And the other choice is to remove the Host related line, which means any
host can connect.



*From:* SPDK [mailto:spdk-bounces(a)lists.01.org <spdk-bounces(a)lists.01.org>] *On
Behalf Of *Yang, Ziye
*Sent:* Tuesday, September 5, 2017 1:04 PM
*To:* Storage Performance Development Kit <spdk(a)lists.01.org>
*Subject:* Re: [SPDK] NVMf target configuration issue



Hi Kiru,



According to the error message, I suggest that you add the Host info in the
following section.





[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 0

  Listen RDMA 10.3.7.2:4420

  Host nqn.2016-06.io.spdk:init

*                 Host
nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14*

  SN SPDK00000000000001

  Namespace AIO0





Thanks.



*From:* SPDK [mailto:spdk-bounces(a)lists.01.org <spdk-bounces(a)lists.01.org>] *On
Behalf Of *Kirubakaran Kaliannan
*Sent:* Tuesday, September 5, 2017 12:52 PM
*To:* Storage Performance Development Kit <spdk(a)lists.01.org>
*Subject:* [SPDK] NVMf target configuration issue







Hi All,



I am trying to get the nvmf configuration going with SPDK.



I have the following configuration,



*# ofed 2.4 on both target and initiator*

*# SPDK + 3.18 kernel on target *

*# 4.9 kernel on initiator*

*# mellanox connect3X-Pro*



*/etc/spdk/nvmf.conf*



[Global]

[Rpc]

  Enable No

  Listen 127.0.0.1

[AIO]

  AIO /dev/sdd AIO0



[Nvmf]

  MaxQueuesPerSession 4

  AcceptorPollRate 10000



[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 0

  Listen RDMA 10.3.7.2:4420

  Host nqn.2016-06.io.spdk:init

  SN SPDK00000000000001

  Namespace AIO0





*From initiator I tried discover and connect*



*# nvme discover -t rdma -a 10.3.7.2 -s 4420*



Discovery Log Number of Records 1, Generation counter 4

=====Discovery Log Entry 0======

trtype:  rdma

adrfam:  ipv4

subtype: nvme subsystem

treq:    not specified

portid:  0

trsvcid: 4420

subnqn:  nqn.2016-06.io.spdk:cnode1

traddr:  10.3.7.2

rdma_prtype: not specified

rdma_qptype: connected

rdma_cms:    rdma-cm

rdma_pkey: 0x0000



*# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420*

Failed to write to /dev/nvme-fabrics: Input/output error



[dmesg]

[59909.541392] nvme nvme0: new ctrl: NQN
"nqn.2014-08.org.nvmexpress.discovery", addr 10.3.7.2:4420

[59940.389952] nvme nvme0: Connect command failed, error wo/DNR bit: 388



*On target I get the following error*



# app/nvmf_tgt/nvmf_tgt -c /etc/spdk/nvmf.conf  > /tmp/1

EAL: Detected 20 lcore(s)

reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on
socket 0

copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine
Offload Enabled

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0

rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***

rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on
10.3.7.2 port 4420 ***

conf.c: 500:spdk_nvmf_construct_subsystem: *NOTICE*: Attaching block device
AIO0 to subsystem nqn.2016-06.io.spdk:cnode1

nvmf_tgt.c: 290:spdk_nvmf_startup: *NOTICE*: Acceptor running on core 0 on
socket 0

*request.c: 171:nvmf_process_connect: *ERROR*: Subsystem
'nqn.2016-06.io.spdk:cnode1' does not allow host
'nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14'*





can you please help on why I am not able to connect in this configuration.



Regards,

-kiru

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 10999 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] NVMf target configuration issue
@ 2017-09-05  5:10 Yang, Ziye
  0 siblings, 0 replies; 18+ messages in thread
From: Yang, Ziye @ 2017-09-05  5:10 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3606 bytes --]

And the other choice is to remove the Host-related line, which means any host can connect.
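
A minimal sketch of that variant, for reference (the AllowAnyHost directive is copied from the later configs in this thread and may or may not be needed on your SPDK version; dropping the Host lines is the key change):

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 0
  Listen RDMA 10.3.7.2:4420
  AllowAnyHost Yes
  SN SPDK00000000000001
  Namespace AIO0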

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Yang, Ziye
Sent: Tuesday, September 5, 2017 1:04 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] NVMf target configuration issue

Hi Kiru,

According to the error message, I suggest that you add the Host info in the following section.


[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 0
  Listen RDMA 10.3.7.2:4420<http://10.3.7.2:4420>
  Host nqn.2016-06.io.spdk:init
                 Host nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14
  SN SPDK00000000000001
  Namespace AIO0


Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Kirubakaran Kaliannan
Sent: Tuesday, September 5, 2017 12:52 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] NVMf target configuration issue



Hi All,

I am trying to get the nvmf configuration going with SPDK.

I have the following configuration,

# ofed 2.4 on both target and initiator
# SPDK + 3.18 kernel on target
# 4.9 kernel on initiator
# mellanox connect3X-Pro

/etc/spdk/nvmf.conf

[Global]
[Rpc]
  Enable No
  Listen 127.0.0.1
[AIO]
  AIO /dev/sdd AIO0

[Nvmf]
  MaxQueuesPerSession 4
  AcceptorPollRate 10000

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 0
  Listen RDMA 10.3.7.2:4420<http://10.3.7.2:4420>
  Host nqn.2016-06.io.spdk:init
  SN SPDK00000000000001
  Namespace AIO0


From initiator I tried discover and connect

# nvme discover -t rdma -a 10.3.7.2 -s 4420

Discovery Log Number of Records 1, Generation counter 4
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified
portid:  0
trsvcid: 4420
subnqn:  nqn.2016-06.io.spdk:cnode1
traddr:  10.3.7.2
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0x0000

# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420
Failed to write to /dev/nvme-fabrics: Input/output error

[dmesg]
[59909.541392] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 10.3.7.2:4420<http://10.3.7.2:4420>
[59940.389952] nvme nvme0: Connect command failed, error wo/DNR bit: 388

On target I get the following error

# app/nvmf_tgt/nvmf_tgt -c /etc/spdk/nvmf.conf  > /tmp/1
EAL: Detected 20 lcore(s)
reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***
rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on 10.3.7.2 port 4420 ***
conf.c: 500:spdk_nvmf_construct_subsystem: *NOTICE*: Attaching block device AIO0 to subsystem nqn.2016-06.io.spdk:cnode1
nvmf_tgt.c: 290:spdk_nvmf_startup: *NOTICE*: Acceptor running on core 0 on socket 0
request.c: 171:nvmf_process_connect: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14'


can you please help on why I am not able to connect in this configuration.

Regards,
-kiru


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 11546 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [SPDK] NVMf target configuration issue
@ 2017-09-05  5:03 Yang, Ziye
  0 siblings, 0 replies; 18+ messages in thread
From: Yang, Ziye @ 2017-09-05  5:03 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3251 bytes --]

Hi Kiru,

According to the error message, I suggest that you add the Host info in the following section.


[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 0
  Listen RDMA 10.3.7.2:4420<http://10.3.7.2:4420>
  Host nqn.2016-06.io.spdk:init
                 Host nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14
  SN SPDK00000000000001
  Namespace AIO0
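
For completeness, and as an assumption about the initiator stack rather than something shown in this thread: nvme-cli normally takes the host NQN from /etc/nvme/hostnqn when that file exists, and otherwise auto-generates one of the nqn.2014-08.org.nvmexpress:NVMf:uuid:... form quoted in the error below, so pinning it explicitly keeps the target-side Host whitelist stable. A sketch, assuming an nvme-cli recent enough to support --hostnqn:

# echo "nqn.2016-06.io.spdk:init" > /etc/nvme/hostnqn
# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420 --hostnqn "nqn.2016-06.io.spdk:init"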


Thanks.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Kirubakaran Kaliannan
Sent: Tuesday, September 5, 2017 12:52 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] NVMf target configuration issue



Hi All,

I am trying to get the nvmf configuration going with SPDK.

I have the following configuration,

# ofed 2.4 on both target and initiator
# SPDK + 3.18 kernel on target
# 4.9 kernel on initiator
# mellanox connect3X-Pro

/etc/spdk/nvmf.conf

[Global]
[Rpc]
  Enable No
  Listen 127.0.0.1
[AIO]
  AIO /dev/sdd AIO0

[Nvmf]
  MaxQueuesPerSession 4
  AcceptorPollRate 10000

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 0
  Listen RDMA 10.3.7.2:4420<http://10.3.7.2:4420>
  Host nqn.2016-06.io.spdk:init
  SN SPDK00000000000001
  Namespace AIO0


From initiator I tried discover and connect

# nvme discover -t rdma -a 10.3.7.2 -s 4420

Discovery Log Number of Records 1, Generation counter 4
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified
portid:  0
trsvcid: 4420
subnqn:  nqn.2016-06.io.spdk:cnode1
traddr:  10.3.7.2
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0x0000

# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420
Failed to write to /dev/nvme-fabrics: Input/output error

[dmesg]
[59909.541392] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 10.3.7.2:4420<http://10.3.7.2:4420>
[59940.389952] nvme nvme0: Connect command failed, error wo/DNR bit: 388

On target I get the following error

# app/nvmf_tgt/nvmf_tgt -c /etc/spdk/nvmf.conf  > /tmp/1
EAL: Detected 20 lcore(s)
reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***
rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on 10.3.7.2 port 4420 ***
conf.c: 500:spdk_nvmf_construct_subsystem: *NOTICE*: Attaching block device AIO0 to subsystem nqn.2016-06.io.spdk:cnode1
nvmf_tgt.c: 290:spdk_nvmf_startup: *NOTICE*: Acceptor running on core 0 on socket 0
request.c: 171:nvmf_process_connect: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14'


can you please help on why I am not able to connect in this configuration.

Regards,
-kiru


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 10615 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [SPDK] NVMf target configuration issue
@ 2017-09-05  4:51 Kirubakaran Kaliannan
  0 siblings, 0 replies; 18+ messages in thread
From: Kirubakaran Kaliannan @ 2017-09-05  4:51 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2736 bytes --]

Hi All,



I am trying to get the nvmf configuration going with SPDK.



I have the following configuration,



*# ofed 2.4 on both target and initiator*

*# SPDK + 3.18 kernel on target *

*# 4.9 kernel on initiator*

*# Mellanox ConnectX-3 Pro*



*/etc/spdk/nvmf.conf*



[Global]

[Rpc]

  Enable No

  Listen 127.0.0.1

[AIO]

  AIO /dev/sdd AIO0



[Nvmf]

  MaxQueuesPerSession 4

  AcceptorPollRate 10000



[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 0

  Listen RDMA 10.3.7.2:4420

  Host nqn.2016-06.io.spdk:init

  SN SPDK00000000000001

  Namespace AIO0





*From initiator I tried discover and connect*



*# nvme discover -t rdma -a 10.3.7.2 -s 4420*



Discovery Log Number of Records 1, Generation counter 4

=====Discovery Log Entry 0======

trtype:  rdma

adrfam:  ipv4

subtype: nvme subsystem

treq:    not specified

portid:  0

trsvcid: 4420

subnqn:  nqn.2016-06.io.spdk:cnode1

traddr:  10.3.7.2

rdma_prtype: not specified

rdma_qptype: connected

rdma_cms:    rdma-cm

rdma_pkey: 0x0000



*# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420*

Failed to write to /dev/nvme-fabrics: Input/output error



[dmesg]

[59909.541392] nvme nvme0: new ctrl: NQN
"nqn.2014-08.org.nvmexpress.discovery", addr 10.3.7.2:4420

[59940.389952] nvme nvme0: Connect command failed, error wo/DNR bit: 388
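
(Decoding that status: 388 is 0x184, which appears to correspond to the NVMe-oF Fabrics "Connect Invalid Host" status (status code type 1, status code 0x84), consistent with the target-side "does not allow host" error below.)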



*On target I get the following error*



# app/nvmf_tgt/nvmf_tgt -c /etc/spdk/nvmf.conf  > /tmp/1

EAL: Detected 20 lcore(s)

reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on
socket 0

copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine
Offload Enabled

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0

nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0

rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***

rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on
10.3.7.2 port 4420 ***

conf.c: 500:spdk_nvmf_construct_subsystem: *NOTICE*: Attaching block device
AIO0 to subsystem nqn.2016-06.io.spdk:cnode1

nvmf_tgt.c: 290:spdk_nvmf_startup: *NOTICE*: Acceptor running on core 0 on
socket 0

*request.c: 171:nvmf_process_connect: *ERROR*: Subsystem
'nqn.2016-06.io.spdk:cnode1' does not allow host
'nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14'*





Can you please help me understand why I am not able to connect with this configuration?



Regards,

-kiru

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 6410 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2017-10-18  6:17 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-09-06  7:28 [SPDK] NVMf target configuration issue Kirubakaran Kaliannan
  -- strict thread matches above, loose matches on Subject: below --
2017-10-18  6:17 [SPDK] NVMf Target " Gyan Prakash
2017-10-18  1:17 Gyan Prakash
2017-10-18  1:06 Gyan Prakash
2017-10-18  0:54 Gyan Prakash
2017-10-18  0:51 Gyan Prakash
2017-10-18  0:51 Liu, Changpeng
2017-10-18  0:41 Harris, James R
2017-10-18  0:36 Gyan Prakash
2017-10-17 20:03 Gyan Prakash
2017-10-17 19:54 Verkamp, Daniel
2017-10-17 19:04 Gyan Prakash
2017-09-07  8:29 [SPDK] NVMf target " Kirubakaran Kaliannan
2017-09-06 15:28 Harris, James R
2017-09-05 10:38 Kirubakaran Kaliannan
2017-09-05  5:10 Yang, Ziye
2017-09-05  5:03 Yang, Ziye
2017-09-05  4:51 Kirubakaran Kaliannan
