From: Isaac Otsiabah <IOtsiabah@us.fujitsu.com>
To: spdk@lists.01.org
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Date: Fri, 13 Apr 2018 11:00:06 -0700	[thread overview]
Message-ID: <BAF7572087063A4BAD2F325FC7533F420573D12AF418@FMSAMAIL.fmsa.local> (raw)
In-Reply-To: 3FF20EF7F07495429158B858FACC0D7F3E016F10@IRSMSX103.ger.corp.intel.com



Hi Tomasz, I configured VPP and thought it was working, because I can ping the initiator machine (192.168.2.10) on my lab's private network.

(On target Server):
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):
vpp#
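
For completeness, I brought the interface up and assigned the address in vppctl roughly as follows (from memory, so the exact invocations may have differed slightly):

  vpp# set interface state TenGigabitEthernet82/0/0 up
  vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24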


vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up            rx packets          58
                                                          rx bytes          5342
                                                          tx packets          58
                                                          tx bytes          5324
                                                          drops               71   (drops were expected because I pinged the interface's own address, 192.168.2.20)
                                                          ip4                 49
local0                            0         down

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.1229 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0412 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0733 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0361 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0332 ms

Statistics: 5 sent, 5 received, 0% packet loss


However, something is missing: I started the iscsi_tgt server (below) and executed the add_portal_group RPC command to add a portal group, but got the bind() error shown below. iscsi.conf is attached (the portal group and target sections are commented out). The iscsi_tgt binary is linked against the same VPP library that I built.

[root@spdk2 spdk]# ./scripts/rpc.py add_portal_group 1 192.168.2.20:3260
Got JSON-RPC error response
request:
{
  "params": {
    "portals": [
      {
        "host": "192.168.2.20",
        "port": "3260"
      }
    ],
    "tag": 1
  },
  "jsonrpc": "2.0",
  "method": "add_portal_group",
  "id": 1
}
response:
{
  "message": "Invalid parameters",
  "code": -32602
}

[root@spdk2 spdk]# ./app/iscsi_tgt/iscsi_tgt -m 0x0101
Starting SPDK v18.04-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0x0101 --file-prefix=spdk_pid8974 ]
EAL: Detected 16 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.spdk_pid8974_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 443:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 645:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x3
reactor.c: 429:_spdk_reactor_run: *NOTICE*: Reactor started on core 8 on socket 1
reactor.c: 429:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 318:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
init_grp.c: 107:spdk_iscsi_init_grp_add_initiator: *WARNING*: Please use "ANY" instead of "ALL"
init_grp.c: 108:spdk_iscsi_init_grp_add_initiator: *WARNING*: Converting "ALL" to "ANY" automatically

posix.c: 221:spdk_posix_sock_create: *ERROR*: bind() failed, errno = 99
posix.c: 231:spdk_posix_sock_create: *ERROR*: IP address 192.168.2.20 not available. Verify IP address in config file and make sure setup script is run before starting spdk app.
portal_grp.c: 190:spdk_iscsi_portal_open: *ERROR*: listen error 192.168.2.20.3260
iscsi_rpc.c: 969:spdk_rpc_add_portal_group: *ERROR*: portal_grp_open failed
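
Errno 99 is EADDRNOTAVAIL, i.e. the bind() in posix.c goes through the kernel's socket layer, and the kernel does not own 192.168.2.20; that address exists only inside VPP. My working assumption is that SPDK needs to be built with its VPP socket implementation enabled so the listen goes through VPP rather than the kernel, which I believe means something like the following, though I am not certain of the exact flag or path:

  ./configure --with-vpp=/path/to/vpp/build-root/install-vpp-native/vpp
  make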







From: Zawadzki, Tomasz <tomasz.zawadzki@intel.com>
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah@us.fujitsu.com>
Cc: Harris, James R <james.r.harris@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; Paul Von-Stamwitz <PVonStamwitz@us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind to by not specifying any target nodes or portal groups; they can be added later via RPC (http://www.spdk.io/doc/iscsi.html#iscsi_rpc).
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.
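
For illustration, a minimal config along those lines simply omits the PortalGroup and TargetNode sections entirely; a sketch (not the exact file from the repo) would be:

  [Global]
    ReactorMask 0x101

  [iSCSI]
    NodeBase "iqn.2016-06.io.spdk"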

The suggested flow for starting up the applications is (a concrete command sketch follows the list):

1. Unbind the interfaces from the kernel.

2. Start VPP and configure the interface via vppctl.

3. Start SPDK.

4. Configure the iSCSI target via RPC; at this point it should be possible to use the interface configured in VPP.
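
Concretely, the sketch mentioned above could look like this (the interface name and PCI address are taken from your earlier mails and are illustrative):

  # 1. Unbind the NIC from the kernel driver
  ./usertools/dpdk-devbind.py -u 0000:07:00.0

  # 2. Start VPP and configure the interface
  vpp unix {cli-listen /run/vpp/cli.sock}
  vppctl set interface state TenGigabitEthernet82/0/0 up
  vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24

  # 3. Start SPDK with no portal groups in the config
  ./app/iscsi_tgt/iscsi_tgt -m 0x101

  # 4. Add the portal group via RPC (then an initiator group and
  #    target node, per the iSCSI RPC docs linked above)
  ./scripts/rpc.py add_portal_group 1 192.168.2.20:3260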

Please note there is some leeway here; the only requirement is that the VPP app is started before the SPDK app.
Interfaces in VPP (such as tap or veth) can be created and configured at runtime and are then available for use in SPDK as well; see the example below.
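
As an example of the runtime case (a sketch; the interface names and address are illustrative), a veth pair created on the host can be handed to VPP as an af_packet host-interface:

  ip link add name veth_spdk type veth peer name veth_host
  ip link set dev veth_spdk up
  ip link set dev veth_host up
  vppctl create host-interface name veth_spdk
  vppctl set interface state host-veth_spdk up
  vppctl set interface ip address host-veth_spdk 10.0.0.1/24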

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah <IOtsiabah@us.fujitsu.com>
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki@intel.com>
Cc: Harris, James R <james.r.harris@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; Paul Von-Stamwitz <PVonStamwitz@us.fujitsu.com>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel, and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and then start the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}
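
(Side note: depending on the setup, the NIC may also need to be bound to a userspace driver such as vfio-pci before VPP's DPDK plugin can claim it, e.g.:

  ./usertools/dpdk-devbind.py -b vfio-pci 0000:07:00.0

but I am not certain whether that step is required here.)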

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:

"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method <http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions under "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running before one can execute the RPC commands. So how do I get the iscsi_tgt server running without an interface to bind to during its initialization?

Please, can any of you explain how to run the SPDK iscsi_tgt application with VPP? For instance: what should change in iscsi.conf? After unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to? What address should be assigned to the Portal in iscsi.conf?

I would appreciate it if anyone could help. Thank you.


Isaac


[-- Attachment: iscsi-conf.tar (application/x-tar, 20480 bytes) --]

Thread overview: 13+ messages
2018-04-13 18:00 Isaac Otsiabah [this message]
2018-04-13 20:29 [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP? Isaac Otsiabah
2018-04-13 22:08 Isaac Otsiabah
2018-04-17 18:46 Isaac Otsiabah
2018-04-17 23:29 Isaac Otsiabah
2018-04-18  2:49 Zawadzki, Tomasz
2018-04-19 16:42 Isaac Otsiabah
2018-08-14 17:05 IOtsiabah
2018-08-14 17:15 IOtsiabah
2018-08-14 19:39 IOtsiabah
2018-08-30 22:51 IOtsiabah
2018-08-31  7:16 Zawadzki, Tomasz
2018-09-05 18:10 Edward.Yang
