* [SPDK] NVMeF Target Setup
@ 2017-03-01 20:30 yzhu
  0 siblings, 0 replies; 4+ messages in thread
From: yzhu @ 2017-03-01 20:30 UTC (permalink / raw)
  To: spdk


Hi all,

I got an error when trying to set up the NVMeF target. The error says
"Could not find NVMe controller at PCI address xxxxx". However, I do
have an NVMe drive in our system. I was wondering if someone has any
ideas about this. I have attached my configuration and the error I got
below.

1. Configuration
# Direct controller
[Subsystem1]
   NQN nqn.2016-06.io.spdk:cnode1
   Core 0
   Mode Direct
   Listen RDMA 10.10.16.10:4420
   Host nqn.2016-06.io.spdk:init
   NVMe 0000:04:00.0


2. Error
Starting Intel(R) DPDK initialization ...
[ DPDK EAL parameters: nvmf -c 1 --file-prefix=spdk12027 
--base-virtaddr=0x1000000000 --proc-type=auto ]
EAL: Detected 40 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Occupied cpu core mask is 0x1
Occupied cpu socket mask is 0x1
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Probing device 0000:04:00.0
Total cores available: 1
Reactor started on core 0 on socket 0
*** RDMA Transport Init ***
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on 
socket 0
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
Subsystem nqn.2016-06.io.spdk:cnode1 is configured to run on a CPU core 
0 belonging to a different NUMA node than the associated NIC. This may 
result in reduced performance.
The NIC is on socket 1
The Subsystem is on socket 0
conf.c: 565:spdk_nvmf_construct_subsystem: ***ERROR*** Could not find 
NVMe controller at PCI address 0000:04:00.0
nvmf_tgt.c: 336:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf() 
failed
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: Requested device 0000:04:00.0 cannot be used
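
In case it helps, a quick way to double-check things on my side would
be the following (generic Linux commands, not SPDK-specific; just a
sanity-check sketch):

# lspci -s 0000:04:00.0
    (confirms the device is present on the PCI bus)
# readlink /sys/bus/pci/devices/0000:04:00.0/driver
    (shows which kernel driver, if any, is currently bound to it)
# grep -i huge /proc/meminfo
    (confirms hugepages are available for the DPDK/SPDK memory setup)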

Thanks in advance,
Yue


* Re: [SPDK] NVMeF Target Setup
@ 2017-03-02 17:52 Walker, Benjamin
  0 siblings, 0 replies; 4+ messages in thread
From: Walker, Benjamin @ 2017-03-02 17:52 UTC (permalink / raw)
  To: spdk


On Thu, 2017-03-02 at 12:37 -0500, yzhu wrote:
> request.c: 244:nvmf_process_connect: ***ERROR*** Subsystem
> 'nqn.2016-06.io.spdk:cnode1' does not allow host
> 'nqn.2014-08.org.nvmexpress:NVMf:uuid:0916cbc8-b8db-4d6e-9c89-cd26cadddfa9'

The error is telling you that your initiator
(nqn.2014-08.org.nvmexpress:NVMf:uuid:0916cbc8-b8db-4d6e-9c89-cd26cadddfa9)
does not have permission to access the subsystem.

> 
> 4. Configurations
> [Subsystem1]
>    NQN nqn.2016-06.io.spdk:cnode1
>    Core 0
>    Mode Direct
>    Listen RDMA 10.10.16.10:4420
>    Host nqn.2016-06.io.spdk:init
>    NVMe 0000:04:00.0

In your configuration above, you'd specified that only a host
(initiator) with NQN nqn.2016-06.io.spdk:init is allowed to connect.
The "Host" directive is optional, so just delete it and it will allow
any initiator to connect.
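
For reference, a sketch of the subsystem block from your configuration
with the Host line removed and everything else unchanged:

# Direct controller, no Host restriction
[Subsystem1]
   NQN nqn.2016-06.io.spdk:cnode1
   Core 0
   Mode Direct
   Listen RDMA 10.10.16.10:4420
   NVMe 0000:04:00.0

Alternatively, if you want to keep the Host restriction, the initiator
has to present the matching host NQN when it connects; nvme-cli has a
--hostnqn option for that, assuming your version supports it.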


* Re: [SPDK] NVMeF Target Setup
@ 2017-03-02 17:37 yzhu
  0 siblings, 0 replies; 4+ messages in thread
From: yzhu @ 2017-03-02 17:37 UTC (permalink / raw)
  To: spdk


Thanks for the help. I ran into another error when trying to connect
to the NVMf target from the client. The NVMf target listened on the
right port, but the client could not connect to it even though the
target was discovered. (I installed all the necessary modules,
including nvme, nvmet, nvme-rdma, and nvmet-rdma.)
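
For completeness, this is roughly how the client-side modules get
loaded (module names as listed above; only the initiator-side modules
should matter here, since the target is the SPDK userspace app):

# modprobe nvme
# modprobe nvme-rdma
    (nvme-rdma pulls in nvme-fabrics, which creates /dev/nvme-fabrics)
# ls -l /dev/nvme-fabrics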

1. Discovery
# nvme discover -t rdma -a 10.10.16.10 -s 4420

Discovery Log Number of Records 1, Generation counter 4
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified
portid:  0
trsvcid: 4420
subnqn:  nqn.2016-06.io.spdk:cnode1
traddr:  10.10.16.10
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0x0000

2. Error when connecting
# nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.10.16.10 -s 
4420
Failed to write to /dev/nvme-fabrics: Input/output error

3. Error on the Target side
Starting Intel(R) DPDK initialization ...
[ DPDK EAL parameters: nvmf -c 1 --file-prefix=spdk8590 
--base-virtaddr=0x1000000000 --proc-type=auto ]
EAL: Detected 40 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Occupied cpu core mask is 0x1
Occupied cpu socket mask is 0x1
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Ioat Copy Engine Offload Enabled
Total cores available: 1
Reactor started on core 0 on socket 0
*** RDMA Transport Init ***
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on 
socket 0
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
Subsystem nqn.2016-06.io.spdk:cnode1 is configured to run on a CPU core 
0 belonging to a different NUMA node than the associated NIC. This may 
result in reduced performance.
The NIC is on socket 1
The Subsystem is on socket 0
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
Attaching NVMe device 0x102fe5b380 at 0000:04:00.0 to subsystem 
nqn.2016-06.io.spdk:cnode1
Acceptor running on core 0 on socket 0
*** NVMf Target Listening on 10.10.16.10 port 4420 ***
request.c: 244:nvmf_process_connect: ***ERROR*** Subsystem 
'nqn.2016-06.io.spdk:cnode1' does not allow host 
'nqn.2014-08.org.nvmexpress:NVMf:uuid:0916cbc8-b8db-4d6e-9c89-cd26cadddfa9'

4. Configurations
# Direct controller
[Subsystem1]
   NQN nqn.2016-06.io.spdk:cnode1
   Core 0
   Mode Direct
   Listen RDMA 10.10.16.10:4420
   Host nqn.2016-06.io.spdk:init
   NVMe 0000:04:00.0

Thanks in advance.

Best regards,
Yue


On 2017-03-01 15:45, Walker, Benjamin wrote:
> Try deleting the [Nvme] section from your configuration file. This is a
>  known bug with direct mode that will be fixed shortly.
> 
> On Wed, 2017-03-01 at 15:30 -0500, yzhu wrote:
>> Hi all,
>> 
>> I got an error when trying to set up the NVMeF target. The error
>> says "Could not find NVMe controller at PCI address xxxxx". However,
>> I do have an NVMe drive in our system. I was wondering if someone
>> has any ideas about this. I have attached my configuration and the
>> error I got below.
>> 
>> 1. Configuration
>> # Direct controller
>> [Subsystem1]
>>    NQN nqn.2016-06.io.spdk:cnode1
>>    Core 0
>>    Mode Direct
>>    Listen RDMA 10.10.16.10:4420
>>    Host nqn.2016-06.io.spdk:init
>>    NVMe 0000:04:00.0
>> 
>> 
>> 2. Error
>> Starting Intel(R) DPDK initialization ...
>> [ DPDK EAL parameters: nvmf -c 1 --file-prefix=spdk12027 
>> --base-virtaddr=0x1000000000 --proc-type=auto ]
>> EAL: Detected 40 lcore(s)
>> EAL: Auto-detected process type: PRIMARY
>> EAL: No free hugepages reported in hugepages-1048576kB
>> EAL: Probing VFIO support...
>> EAL: PCI device 0000:04:00.0 on NUMA socket 0
>> EAL:   probe driver: 8086:953 spdk_nvme
>> Occupied cpu core mask is 0x1
>> Occupied cpu socket mask is 0x1
>> EAL: PCI device 0000:04:00.0 on NUMA socket 0
>> EAL:   probe driver: 8086:953 spdk_nvme
>> Ioat Copy Engine Offload Enabled
>> EAL: PCI device 0000:04:00.0 on NUMA socket 0
>> EAL:   probe driver: 8086:953 spdk_nvme
>> Probing device 0000:04:00.0
>> Total cores available: 1
>> Reactor started on core 0 on socket 0
>> *** RDMA Transport Init ***
>> allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0
>> on socket 0
>> allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
>> Subsystem nqn.2016-06.io.spdk:cnode1 is configured to run on a CPU
>> core 0 belonging to a different NUMA node than the associated NIC.
>> This may result in reduced performance.
>> The NIC is on socket 1
>> The Subsystem is on socket 0
>> conf.c: 565:spdk_nvmf_construct_subsystem: ***ERROR*** Could not find
>> NVMe controller at PCI address 0000:04:00.0
>> nvmf_tgt.c: 336:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf()
>> failed
>> EAL: PCI device 0000:04:00.0 on NUMA socket 0
>> EAL:   probe driver: 8086:953 spdk_nvme
>> EAL: Requested device 0000:04:00.0 cannot be used
>> 
>> Thanks in advance,
>> Yue


* Re: [SPDK] NVMeF Target Setup
@ 2017-03-01 20:45 Walker, Benjamin
  0 siblings, 0 replies; 4+ messages in thread
From: Walker, Benjamin @ 2017-03-01 20:45 UTC (permalink / raw)
  To: spdk


Try deleting the [Nvme] section from your configuration file. This is a
 known bug with direct mode that will be fixed shortly.
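
With the [Nvme] section removed, the file should reduce to just the
subsystem block you already posted, i.e. roughly (a sketch, nothing
else changed):

# Direct controller
[Subsystem1]
   NQN nqn.2016-06.io.spdk:cnode1
   Core 0
   Mode Direct
   Listen RDMA 10.10.16.10:4420
   Host nqn.2016-06.io.spdk:init
   NVMe 0000:04:00.0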

On Wed, 2017-03-01 at 15:30 -0500, yzhu wrote:
> Hi all,
> 
> I got an error when trying to set up the NVMeF target. The error
> says "Could not find NVMe controller at PCI address xxxxx". However,
> I do have an NVMe drive in our system. I was wondering if someone
> has any ideas about this. I have attached my configuration and the
> error I got below.
> 
> 1. Configuration
> # Direct controller
> [Subsystem1]
>    NQN nqn.2016-06.io.spdk:cnode1
>    Core 0
>    Mode Direct
>    Listen RDMA 10.10.16.10:4420
>    Host nqn.2016-06.io.spdk:init
>    NVMe 0000:04:00.0
> 
> 
> 2. Error
> Starting Intel(R) DPDK initialization ...
> [ DPDK EAL parameters: nvmf -c 1 --file-prefix=spdk12027 
> --base-virtaddr=0x1000000000 --proc-type=auto ]
> EAL: Detected 40 lcore(s)
> EAL: Auto-detected process type: PRIMARY
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:953 spdk_nvme
> Occupied cpu core mask is 0x1
> Occupied cpu socket mask is 0x1
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:953 spdk_nvme
> Ioat Copy Engine Offload Enabled
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:953 spdk_nvme
> Probing device 0000:04:00.0
> Total cores available: 1
> Reactor started on core 0 on socket 0
> *** RDMA Transport Init ***
> allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0
> on socket 0
> allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
> Subsystem nqn.2016-06.io.spdk:cnode1 is configured to run on a CPU
> core 0 belonging to a different NUMA node than the associated NIC.
> This may result in reduced performance.
> The NIC is on socket 1
> The Subsystem is on socket 0
> conf.c: 565:spdk_nvmf_construct_subsystem: ***ERROR*** Could not find
> NVMe controller at PCI address 0000:04:00.0
> nvmf_tgt.c: 336:spdk_nvmf_startup: ***ERROR*** spdk_nvmf_parse_conf()
> failed
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:953 spdk_nvme
> EAL: Requested device 0000:04:00.0 cannot be used
> 
> Thanks in advance,
> Yue


Thread overview: 4+ messages
2017-03-01 20:30 [SPDK] NVMeF Target Setup yzhu
2017-03-01 20:45 Walker, Benjamin
2017-03-02 17:37 yzhu
2017-03-02 17:52 Walker, Benjamin
