* Re: [SPDK] nvmf.conf: AcceptorCore Vs Core
@ 2016-11-21 20:06 Raj Pandurangan
  0 siblings, 0 replies; 8+ messages in thread
From: Raj Pandurangan @ 2016-11-21 20:06 UTC (permalink / raw)
  To: spdk


Hello John et al,

Looking at your log below, you got “RDMA Transport Init” twice because you have two RNICs.  In my case, “NUM_TRANSPORTS” in transport.c has a value of 1, so only one of my RDMA devices gets initialized.

I’m not quite sure what the right fix would be so that both of my RNIC devices get initialized.  Any suggestions?
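
Side note: the transport table is fixed at compile time. Assuming the stock source layout, with transport.c under lib/nvmf/, the table that NUM_TRANSPORTS is derived from can be inspected with something like:

  # path below is an assumption about the tree layout
  grep -n -B5 'NUM_TRANSPORTS' lib/nvmf/transport.c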

Thanks,

From: Raj (Rajinikanth) Pandurangan
Sent: Thursday, November 17, 2016 3:12 PM
To: Storage Performance Development Kit
Subject: RE: nvmf.conf: AcceptorCore Vs Core

Thanks for the details, John.  It helps, but I think I’m still missing something.

Here is the latest output from nvmf_tgt.

/rajp/spdk# app/nvmf_tgt/nvmf_tgt -c etc/spdk/nvmf.conf -p 15
Starting Intel(R) DPDK initialization ...
[ DPDK EAL parameters: nvmf -c f000 -n 4 -m 2048 --master-lcore=15 --file-prefix=rte0 --proc-type=auto ]
EAL: Detected 48 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
done.
Occupied cpu core mask is 0xf000
Occupied cpu socket mask is 0x3
Ioat Copy Engine Offload Enabled
Total cores available: 4
Reactor started on core 0xc
Reactor started on core 0xd
Reactor started on core 0xe
Reactor started on core 0xf
*** RDMA Transport Init ***
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 15
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 13
*** NVMf Target Listening on 101.10.10.180 port 4420 ***
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:07:00.0 on NUMA socket 0
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:88:00.0 on NUMA socket 1
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:89:00.0 on NUMA socket 1
EAL:   probe driver: 144d:a821 SPDK NVMe
Attaching NVMe device 0x7ff98d65b6c0 at 0:6:0.0 to subsystem 0x1acf750
allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 14
Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NIC. This may result in reduced performance.
*** NVMf Target Listening on 100.10.10.180 port 4420 ***
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:07:00.0 on NUMA socket 0
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:88:00.0 on NUMA socket 1
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:89:00.0 on NUMA socket 1
EAL:   probe driver: 144d:a821 SPDK NVMe
Attaching NVMe device 0x7ff98d637700 at 0:88:0.0 to subsystem 0x1ad5900
Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NVMe device. This may result in reduced performance.
Acceptor running on core 12


Here is conf file:

# NVMf Target Configuration File
#
# Please write all parameters using ASCII.
# The parameter must be quoted if it includes whitespace.
#
# Configuration syntax:
# Leading whitespace is ignored.
# Lines starting with '#' are comments.
# Lines ending with '\' are concatenated with the next line.
# Bracketed ([]) names define sections

[Global]
  # Users can restrict work items to only run on certain cores by
  #  specifying a ReactorMask.  Default ReactorMask mask is defined as
  #  -c option in the 'ealargs' setting at beginning of file nvmf_tgt.c.
  #ReactorMask 0x00FF
  ReactorMask 0x00F000

  # Tracepoint group mask for spdk trace buffers
  # Default: 0x0 (all tracepoint groups disabled)
  # Set to 0xFFFFFFFFFFFFFFFF to enable all tracepoint groups.
  #TpointGroupMask 0x0

  # syslog facility
  LogFacility "local7"

[Rpc]
  # Defines whether to enable configuration via RPC.
  # Default is disabled.  Note that the RPC interface is not
  # authenticated, so users should be careful about enabling
  # RPC in non-trusted environments.
  Enable No

# Users may change this section to create a different number or size of
#  malloc LUNs.
# This will generate 8 LUNs with a malloc-allocated backend.
# Each LUN will be size 64MB and these will be named
# Malloc0 through Malloc7.  Not all LUNs defined here are necessarily
#  used below.
[Malloc]
  NumberOfLuns 8
  LunSizeInMB 64

# Define NVMf protocol global options
[Nvmf]
  # Set the maximum number of submission and completion queues per session.
  # Setting this to '8', for example, allows for 8 submission and 8 completion queues
  # per session.
  MaxQueuesPerSession 128

  # Set the maximum number of outstanding I/O per queue.
  #MaxQueueDepth 128

  # Set the maximum in-capsule data size. Must be a multiple of 16.
  #InCapsuleDataSize 4096

  # Set the maximum I/O size. Must be a multiple of 4096.
  #MaxIOSize 131072

  # Set the global acceptor lcore ID, lcores are numbered starting at 0.
  AcceptorCore 12

  # Set how often the acceptor polls for incoming connections. The acceptor is also
  # responsible for polling existing connections that have gone idle. 0 means continuously
  # poll. Units in microseconds.
  #AcceptorPollRate  1000
  AcceptorPollRate  0

# Define an NVMf Subsystem.
# Direct controller
[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 13
  Mode Direct
  Listen RDMA 101.10.10.180:4420
#  Host nqn.2016-06.io.spdk:init
  NVMe 0000:06:00.0

[Subsystem2]
  NQN nqn.2016-06.io.spdk:cnode2
  Core 14
  Mode Direct
  Listen RDMA 100.10.10.180:4420
#  Host nqn.2016-06.io.spdk:init
  NVMe 0000:88:00.0


My NUMA nodes and core:
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47
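
Side note, to tie the mask to the warnings above: ReactorMask 0x00F000 selects lcores 12-15, and per the listing lcores 12 and 14 sit on node 0 while 13 and 15 sit on node 1.  So Subsystem2 (Core 14, node 0) together with NVMe 0000:88:00.0 (reported on NUMA socket 1 in the log) is exactly the cross-NUMA pairing the warning flags.  A rough way to expand a mask into lcore numbers (illustrative shell, 48 lcores as detected above):

  # ReactorMask 0x00F000 -> which lcores? (prints lcore 12..15)
  for i in $(seq 0 47); do
    [ $(( (0x00F000 >> i) & 1 )) -eq 1 ] && echo "lcore $i"
  done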

Thanks,
-Rajinikanth
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Kariuki, John K
Sent: Thursday, November 17, 2016 1:34 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] nvmf.conf: AcceptorCore Vs Core

Raj
What is your Reactor Mask?
Here is an example of settings that I have used in my system to successfully assign work items to different cores.

1)     Set the reactor mask to ReactorMask 0xF000000 in the conf file to use cores 24, 25, 26 and 27 for SPDK.

The ReactorMask restricts work items to only run on certain cores.

2)     Put the acceptor on core 24 in conf file: AcceptorCore 24

3)     Put my subsystems on Core 25 and 26

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 25
  Mode Direct
  Listen RDMA 192.168.100.8:4420
  Host nqn.2016-06.io.spdk:init
  NVMe 0000:81:00.0

# Multiple subsystems are allowed.
[Subsystem2]
  NQN nqn.2016-06.io.spdk:cnode2
  Core 26
  Mode Direct
  Listen RDMA 192.168.100.9:4420
  Host nqn.2016-06.io.spdk:init
  NVMe 0000:86:00.0

4)     Put the master on core 27 using -p at the command line: ./nvmf_tgt -c nvmf.conf.coreaffinity -p 27

When the nvmf target starts I get the following output
Starting Intel(R) DPDK initialization ...
[ DPDK EAL parameters: nvmf -c f000000 -n 4 -m 2048 --master-lcore=27 --file-prefix=rte0 --proc-type=auto ]
EAL: Detected 96 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
done.
Occupied cpu core mask is 0xf000000
Occupied cpu socket mask is 0x2
Ioat Copy Engine Offload Enabled
Total cores available: 4
Reactor started on core 24 on socket 1
Reactor started on core 25 on socket 1
Reactor started on core 27 on socket 1
Reactor started on core 26 on socket 1
*** RDMA Transport Init ***
*** RDMA Transport Init ***
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 27 on socket 1
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 25 on socket 1
*** NVMf Target Listening on 192.168.100.8 port 4420 ***
EAL: PCI device 0000:81:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
Attaching NVMe device 0x7f9256c38b80 at 0:81:0.0 to subsystem nqn.2016-06.io.spdk:cnode1
allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 26 on socket 1
*** NVMf Target Listening on 192.168.100.9 port 4420 ***
EAL: PCI device 0000:81:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
Attaching NVMe device 0x7f9256c17880 at 0:86:0.0 to subsystem nqn.2016-06.io.spdk:cnode2
Acceptor running on core 24 on socket 1

Hope this helps.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Raj (Rajinikanth) Pandurangan
Sent: Thursday, November 17, 2016 12:52 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] nvmf.conf: AcceptorCore Vs Core

Hello,

I have a server with two NUMA nodes.  On each node, I have configured a NIC.

In the nvmf.conf file, I would like to assign the right lcore to each subsystem based on the node configuration.

Here is snippet of nvmf.conf:
…
..
[Subsystem1]
NQN nqn.2016-06.io.spdk:cnode1
Core 0
Mode Direct
Listen RDMA 100.10.10.180:4420
NVMe 0000:06:00.0


[Subsystem2]
NQN nqn.2016-06.io.spdk:cnode2
Core 1
Mode Direct
Listen RDMA 101.10.10.180:4420
NVMe 0000:86:00.0


But I noticed that it always uses core 0 for both subsystems, no matter what value is assigned to “Core” under the “Subsystem” sections.

The following warning confirms that it uses lcore 0.

allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 0
“Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NVMe device. This may result in reduced performance.”

I also get a segmentation fault if I try to set any non-zero value for “AcceptorCore”.

It would be nice if any of you could give more insight into “AcceptorCore” and “Core <lcore>”.

Thanks,





* Re: [SPDK] nvmf.conf: AcceptorCore Vs Core
@ 2016-11-23 16:37 Walker, Benjamin
  0 siblings, 0 replies; 8+ messages in thread
From: Walker, Benjamin @ 2016-11-23 16:37 UTC (permalink / raw)
  To: spdk


On Tue, 2016-11-22 at 18:14 +0000, Raj (Rajinikanth) Pandurangan wrote:
> Just to update you all, latest DPDK (16.11) fixes RDMA init issue that I was
> facing with two RNICs.

The latest version of SPDK on master requires DPDK 16.11 to correctly enumerate
more than one NVMe device. This may be addressed prior to the next release -
we're looking at it now. The most recent SPDK release can run with older
versions of DPDK though. I think what you are seeing is that because only one
NVMe device is found, the system only initializes the NIC that it is going to
use with that device.
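
To double-check which DPDK the target was built against, assuming a source build with RTE_SDK pointing at the DPDK tree, something like the following prints the version:

  # prints e.g. 16.11.0; RTE_SDK is an assumption about your setup
  cd $RTE_SDK && make showversion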



* Re: [SPDK] nvmf.conf: AcceptorCore Vs Core
@ 2016-11-22 18:14 Raj Pandurangan
  0 siblings, 0 replies; 8+ messages in thread
From: Raj Pandurangan @ 2016-11-22 18:14 UTC (permalink / raw)
  To: spdk


Just to update you all: the latest DPDK (16.11) fixes the RDMA init issue that I was facing with two RNICs.



* Re: [SPDK] nvmf.conf: AcceptorCore Vs Core
@ 2016-11-18 17:18 Raj Pandurangan
  0 siblings, 0 replies; 8+ messages in thread
From: Raj Pandurangan @ 2016-11-18 17:18 UTC (permalink / raw)
  To: spdk


Hello Chang,

Yes, I did make sure that both of my RNIC devices are on different NUMA nodes.

Here is lspci output:
03:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
03:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
82:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
82:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
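
lspci alone does not show the NUMA node; assuming a standard Linux sysfs layout, it can also be read per device, for example:

  # NUMA node of each ConnectX-4 adapter listed above
  cat /sys/bus/pci/devices/0000:03:00.0/numa_node
  cat /sys/bus/pci/devices/0000:82:00.0/numa_node
  # or via the RDMA device name (mlx5_0 here is illustrative)
  cat /sys/class/infiniband/mlx5_0/device/numa_node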

Also, it appears that only one of my RDMA devices got initialized even though I have two RNICs.  I’m looking into nvmf.c and transport.c.

Total cores available: 4
Reactor started on core 0xc
Reactor started on core 0xd
Reactor started on core 0xe
Reactor started on core 0xf
*** RDMA Transport Init ***
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 15
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 13
*** NVMf Target Listening on 101.10.10.180 port 4420 ***

Thanks,
-Rajinikanth

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Liu, Changpeng
Sent: Thursday, November 17, 2016 5:45 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] nvmf.conf: AcceptorCore Vs Core

Raj,

When it starts, SPDK checks whether the RNIC device, the NVMe device, and the core a subsystem runs on are all on the same NUMA node, and prints a warning like the one you saw in your environment when they are not.

From your information:
NVMe 0000:06:00.0: Socket 0
NVMe 0000:07:00.0: Socket 0
NVMe 0000:88:00.0: Socket 1
NVMe 0000:89:00.0: Socket 1

So can you check your RNIC devices’ numa node?



* Re: [SPDK] nvmf.conf: AcceptorCore Vs Core
@ 2016-11-18  1:45 Liu, Changpeng
  0 siblings, 0 replies; 8+ messages in thread
From: Liu, Changpeng @ 2016-11-18  1:45 UTC (permalink / raw)
  To: spdk


Raj,

When it starts, SPDK checks whether the RNIC device, the NVMe device, and the core a subsystem runs on are all on the same NUMA node, and prints a warning like the one you saw in your environment when they are not.

From your information:
NVMe 0000:06:00.0: Socket 0
NVMe 0000:07:00.0: Socket 0
NVMe 0000:88:00.0: Socket 1
NVMe 0000:89:00.0: Socket 1

So can you check your RNIC devices’ numa node?



* Re: [SPDK] nvmf.conf: AcceptorCore Vs Core
@ 2016-11-17 23:11 Raj Pandurangan
  0 siblings, 0 replies; 8+ messages in thread
From: Raj Pandurangan @ 2016-11-17 23:11 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 9512 bytes --]

Thanks for the details John.  Though it helps, I think I’m still missing something more.

Here is the latest output from nvmf_tgt.

/rajp/spdk# app/nvmf_tgt/nvmf_tgt -c etc/spdk/nvmf.conf -p 15
Starting Intel(R) DPDK initialization ...
[ DPDK EAL parameters: nvmf -c f000 -n 4 -m 2048 --master-lcore=15 --file-prefix=rte0 --proc-type=auto ]
EAL: Detected 48 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
done.
Occupied cpu core mask is 0xf000
Occupied cpu socket mask is 0x3
Ioat Copy Engine Offload Enabled
Total cores available: 4
Reactor started on core 0xc
Reactor started on core 0xd
Reactor started on core 0xe
Reactor started on core 0xf
*** RDMA Transport Init ***
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 15
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 13
*** NVMf Target Listening on 101.10.10.180 port 4420 ***
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:07:00.0 on NUMA socket 0
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:88:00.0 on NUMA socket 1
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:89:00.0 on NUMA socket 1
EAL:   probe driver: 144d:a821 SPDK NVMe
Attaching NVMe device 0x7ff98d65b6c0 at 0:6:0.0 to subsystem 0x1acf750
allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 14
Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NIC. This may result in reduced performance.
*** NVMf Target Listening on 100.10.10.180 port 4420 ***
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:07:00.0 on NUMA socket 0
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:88:00.0 on NUMA socket 1
EAL:   probe driver: 144d:a821 SPDK NVMe
EAL: PCI device 0000:89:00.0 on NUMA socket 1
EAL:   probe driver: 144d:a821 SPDK NVMe
Attaching NVMe device 0x7ff98d637700 at 0:88:0.0 to subsystem 0x1ad5900
Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NVMe device. This may result in reduced performance.
Acceptor running on core 12


Here is conf file:

# NVMf Target Configuration File
#
# Please write all parameters using ASCII.
# The parameter must be quoted if it includes whitespace.
#
# Configuration syntax:
# Leading whitespace is ignored.
# Lines starting with '#' are comments.
# Lines ending with '\' are concatenated with the next line.
# Bracketed ([]) names define sections

[Global]
  # Users can restrict work items to only run on certain cores by
  #  specifying a ReactorMask.  Default ReactorMask mask is defined as
  #  -c option in the 'ealargs' setting at beginning of file nvmf_tgt.c.
  #ReactorMask 0x00FF
  ReactorMask 0x00F000

  # Tracepoint group mask for spdk trace buffers
  # Default: 0x0 (all tracepoint groups disabled)
  # Set to 0xFFFFFFFFFFFFFFFF to enable all tracepoint groups.
  #TpointGroupMask 0x0

  # syslog facility
  LogFacility "local7"

[Rpc]
  # Defines whether to enable configuration via RPC.
  # Default is disabled.  Note that the RPC interface is not
  # authenticated, so users should be careful about enabling
  # RPC in non-trusted environments.
  Enable No

# Users may change this section to create a different number or size of
#  malloc LUNs.
# This will generate 8 LUNs with a malloc-allocated backend.
# Each LUN will be size 64MB and these will be named
# Malloc0 through Malloc7.  Not all LUNs defined here are necessarily
#  used below.
[Malloc]
  NumberOfLuns 8
  LunSizeInMB 64

# Define NVMf protocol global options
[Nvmf]
  # Set the maximum number of submission and completion queues per session.
  # Setting this to '8', for example, allows for 8 submission and 8 completion queues
  # per session.
  MaxQueuesPerSession 128

  # Set the maximum number of outstanding I/O per queue.
  #MaxQueueDepth 128

  # Set the maximum in-capsule data size. Must be a multiple of 16.
  #InCapsuleDataSize 4096

  # Set the maximum I/O size. Must be a multiple of 4096.
  #MaxIOSize 131072

  # Set the global acceptor lcore ID, lcores are numbered starting at 0.
  AcceptorCore 12

  # Set how often the acceptor polls for incoming connections. The acceptor is also
  # responsible for polling existing connections that have gone idle. 0 means continuously
  # poll. Units in microseconds.
  #AcceptorPollRate  1000
  AcceptorPollRate  0

# Define an NVMf Subsystem.
# Direct controller
[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 13
  Mode Direct
  Listen RDMA 101.10.10.180:4420
#  Host nqn.2016-06.io.spdk:init
  NVMe 0000:06:00.0

[Subsystem2]
  NQN nqn.2016-06.io.spdk:cnode2
  Core 14
  Mode Direct
  Listen RDMA 100.10.10.180:4420
#  Host nqn.2016-06.io.spdk:init
  NVMe 0000:88:00.0


My NUMA nodes and cores:
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47
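(As a sanity check on my side: ReactorMask 0x00F000 selects lcores 12-15, and with the layout above cores 12/14 are on node0 while 13/15 are on node1.  To confirm which NUMA node each PCI device sits on, I believe the sysfs numa_node attribute can be read directly, assuming the kernel populates it; the device addresses below are the ones from my config, and the expected values match the EAL “NUMA socket” lines in the log:

cat /sys/bus/pci/devices/0000:06:00.0/numa_node    # NVMe for Subsystem1 -> 0
cat /sys/bus/pci/devices/0000:88:00.0/numa_node    # NVMe for Subsystem2 -> 1
cat /sys/class/infiniband/*/device/numa_node       # one value per RNIC

So in principle each subsystem’s Core could be picked from the same node as its NVMe device and RNIC.)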

Thanks,
-Rajinikanth
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Kariuki, John K
Sent: Thursday, November 17, 2016 1:34 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] nvmf.conf: AcceptorCore Vs Core

Raj
What is your Reactor Mask?
Here is an example of settings that I have used in my system to successfully assign work items to different cores.

1)     Set the reactor mask to ReactorMask 0xF000000 in the conf file to use cores 24, 25, 26 and 27 for SPDK.

The ReactorMask restricts work items to only run on certain cores.

2)     Put the acceptor on core 24 in conf file: AcceptorCore 24

3)     Put my subsystems on Core 25 and 26

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 25
  Mode Direct
  Listen RDMA 192.168.100.8:4420
  Host nqn.2016-06.io.spdk:init
  NVMe 0000:81:00.0

# Multiple subsystems are allowed.
[Subsystem2]
  NQN nqn.2016-06.io.spdk:cnode2
  Core 26
  Mode Direct
  Listen RDMA 192.168.100.9:4420
  Host nqn.2016-06.io.spdk:init
  NVMe 0000:86:00.0

4)     Put the master on core 27 using -p on the command line: ./nvmf_tgt -c nvmf.conf.coreaffinity -p 27

When the nvmf target starts, I get the following output:
Starting Intel(R) DPDK initialization ...
[ DPDK EAL parameters: nvmf -c f000000 -n 4 -m 2048 --master-lcore=27 --file-prefix=rte0 --proc-type=auto ]
EAL: Detected 96 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
done.
Occupied cpu core mask is 0xf000000
Occupied cpu socket mask is 0x2
Ioat Copy Engine Offload Enabled
Total cores available: 4
Reactor started on core 24 on socket 1
Reactor started on core 25 on socket 1
Reactor started on core 27 on socket 1
Reactor started on core 26 on socket 1
*** RDMA Transport Init ***
*** RDMA Transport Init ***
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 27 on socket 1
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 25 on socket 1
*** NVMf Target Listening on 192.168.100.8 port 4420 ***
EAL: PCI device 0000:81:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
Attaching NVMe device 0x7f9256c38b80 at 0:81:0.0 to subsystem nqn.2016-06.io.spdk:cnode1
allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 26 on socket 1
*** NVMf Target Listening on 192.168.100.9 port 4420 ***
EAL: PCI device 0000:81:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
Attaching NVMe device 0x7f9256c17880 at 0:86:0.0 to subsystem nqn.2016-06.io.spdk:cnode2
Acceptor running on core 24 on socket 1

Hope this helps.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Raj (Rajinikanth) Pandurangan
Sent: Thursday, November 17, 2016 12:52 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] nvmf.conf: AcceptorCore Vs Core

Hello,

I have a server with two NUMA nodes.  On each node, I have configured a NIC.

In the nvmf.conf file, I would like to assign the right lcore to each subsystem based on the node configuration.

Here is snippet of nvmf.conf:
…
..
[Subsystem1]
NQN nqn.2016-06.io.spdk:cnode1
Core 0
Mode Direct
Listen RDMA 100.10.10.180:4420
NVMe 0000:06:00.0


[Subsystem2]
NQN nqn.2016-06.io.spdk:cnode2
Core 1
Mode Direct
Listen RDMA 101.10.10.180:4420
NVMe 0000:86:00.0


But I noticed that it always uses “core 0” for both subsystems, no matter what value is assigned to “Core” under the “Subsystem” sections.

The following warning confirms that it uses lcore 0:

allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 0
“Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NVMe device. This may result in reduced performance.”

I also get a “Segmentation fault” if I try to set any non-zero value for “AcceptorCore”.

It would be nice if any of you could give more insight into “AcceptorCore” and “Core <lcore>”.

Thanks,



[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 51379 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [SPDK] nvmf.conf: AcceptorCore Vs Core
@ 2016-11-17 21:33 Kariuki, John K
  0 siblings, 0 replies; 8+ messages in thread
From: Kariuki, John K @ 2016-11-17 21:33 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3983 bytes --]

Raj
What is your Reactor Mask?
Here is an example of settings that I have used in my system to successfully assign work items to different cores.

1)     Set the reactor mask to ReactorMask 0xF000000 in the conf file to use cores 24, 25, 26 and 27 for SPDK.

The ReactorMask restricts work items to only run on certain cores (a quick way to double-check the mask value is shown after the log output below).

2)     Put the acceptor on core 24 in conf file: AcceptorCore 24

3)     Put my subsystems on Core 25 and 26

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 25
  Mode Direct
  Listen RDMA 192.168.100.8:4420
  Host nqn.2016-06.io.spdk:init
  NVMe 0000:81:00.0

# Multiple subsystems are allowed.
[Subsystem2]
  NQN nqn.2016-06.io.spdk:cnode2
  Core 26
  Mode Direct
  Listen RDMA 192.168.100.9:4420
  Host nqn.2016-06.io.spdk:init
  NVMe 0000:86:00.0

4)     Put the master on core 27 using -p on the command line: ./nvmf_tgt -c nvmf.conf.coreaffinity -p 27

When the nvmf target starts, I get the following output:
Starting Intel(R) DPDK initialization ...
[ DPDK EAL parameters: nvmf -c f000000 -n 4 -m 2048 --master-lcore=27 --file-prefix=rte0 --proc-type=auto ]
EAL: Detected 96 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
done.
Occupied cpu core mask is 0xf000000
Occupied cpu socket mask is 0x2
Ioat Copy Engine Offload Enabled
Total cores available: 4
Reactor started on core 24 on socket 1
Reactor started on core 25 on socket 1
Reactor started on core 27 on socket 1
Reactor started on core 26 on socket 1
*** RDMA Transport Init ***
*** RDMA Transport Init ***
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 27 on socket 1
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 25 on socket 1
*** NVMf Target Listening on 192.168.100.8 port 4420 ***
EAL: PCI device 0000:81:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
Attaching NVMe device 0x7f9256c38b80 at 0:81:0.0 to subsystem nqn.2016-06.io.spdk:cnode1
allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 26 on socket 1
*** NVMf Target Listening on 192.168.100.9 port 4420 ***
EAL: PCI device 0000:81:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
Attaching NVMe device 0x7f9256c17880 at 0:86:0.0 to subsystem nqn.2016-06.io.spdk:cnode2
Acceptor running on core 24 on socket 1
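
For what it’s worth, the ReactorMask is just the OR of one bit per lcore, so a quick shell check (bash arithmetic, purely illustrative) reproduces the value for cores 24-27:

printf '0x%x\n' $(( (1<<24) | (1<<25) | (1<<26) | (1<<27) ))    # prints 0xf000000

which matches the “Occupied cpu core mask” line in the log above.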

Hope this helps.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Raj (Rajinikanth) Pandurangan
Sent: Thursday, November 17, 2016 12:52 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] nvmf.conf: AcceptorCore Vs Core

Hello,

I have a server with two NUMA nodes.  On each node, I have configured a NIC.

In the nvmf.conf file, I would like to assign the right lcore to each subsystem based on the node configuration.

Here is snippet of nvmf.conf:
…
..
[Subsystem1]
NQN nqn.2016-06.io.spdk:cnode1
Core 0
Mode Direct
Listen RDMA 100.10.10.180:4420
NVMe 0000:06:00.0


[Subsystem2]
NQN nqn.2016-06.io.spdk:cnode2
Core 1
Mode Direct
Listen RDMA 101.10.10.180:4420
NVMe 0000:86:00.0


But I noticed that it always uses “core 0” for both subsystems, no matter what value is assigned to “Core” under the “Subsystem” sections.

The following warning confirms that it uses lcore 0:

allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 0
“Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NVMe device. This may result in reduced performance.”

I also get a “Segmentation fault” if I try to set any non-zero value for “AcceptorCore”.

It would be nice if any of you could give more insight into “AcceptorCore” and “Core <lcore>”.

Thanks,



[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 26940 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [SPDK] nvmf.conf: AcceptorCore Vs Core
@ 2016-11-17 19:51 Raj Pandurangan
  0 siblings, 0 replies; 8+ messages in thread
From: Raj Pandurangan @ 2016-11-17 19:51 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1125 bytes --]

Hello,

I have a server with two NUMA nodes.  On each node, I have configured a NIC.

In the nvmf.conf file, I would like to assign the right lcore to each subsystem based on the node configuration.

Here is snippet of nvmf.conf:
…
..
[Subsystem1]
NQN nqn.2016-06.io.spdk:cnode1
Core 0
Mode Direct
Listen RDMA 100.10.10.180:4420
NVMe 0000:06:00.0


[Subsystem2]
NQN nqn.2016-06.io.spdk:cnode2
Core 1
Mode Direct
Listen RDMA 101.10.10.180:4420
NVMe 0000:86:00.0


But I noticed that it always uses “core 0” for both subsystems, no matter what value is assigned to “Core” under the “Subsystem” sections.

The following warning confirms that it uses lcore 0:

allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 0
“Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NVMe device. This may result in reduced performance.”

I also get a “Segmentation fault” if I try to set any non-zero value for “AcceptorCore”.

It would be nice if any of you could give more insight into “AcceptorCore” and “Core <lcore>”.

Thanks,



[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 8896 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2016-11-23 16:37 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-21 20:06 [SPDK] nvmf.conf: AcceptorCore Vs Core Raj Pandurangan
  -- strict thread matches above, loose matches on Subject: below --
2016-11-23 16:37 Walker, Benjamin
2016-11-22 18:14 Raj Pandurangan
2016-11-18 17:18 Raj Pandurangan
2016-11-18  1:45 Liu, Changpeng
2016-11-17 23:11 Raj Pandurangan
2016-11-17 21:33 Kariuki, John K
2016-11-17 19:51 Raj Pandurangan

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.