Hi Tomasz, I configured VPP and thought it was working, because I can ping the initiator machine (192.168.2.10) on my lab's private network.
(On the target server):
vpp# show int address
TenGigabitEthernet82/0/0 (up):
192.168.2.20/24
local0 (dn):
vpp#
vpp# show int
Name                      Idx  State  Counter     Count
TenGigabitEthernet82/0/0  1    up     rx packets  58
                                      rx bytes    5342
                                      tx packets  58
                                      tx bytes    5324
                                      drops       71   (drops were expected because I pinged the interface's own address, 192.168.2.20)
                                      ip4         49
local0                    0    down
vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.1229 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0412 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0733 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0361 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0332 ms
Statistics: 5 sent, 5 received, 0% packet loss
However, something is missing, because I started the iscsi_tgt server (below) and executed the add_portal_group RPC command to add a portal group, and got the bind() error shown below. iscsi.conf is attached (the portal group and target sections are commented out). The iscsi_tgt binary was linked against the same VPP library that I built.
[root@spdk2 spdk]# ./scripts/rpc.py add_portal_group 1 192.168.2.20:3260
Got JSON-RPC error response
request:
{
  "params": {
    "portals": [
      {
        "host": "192.168.2.20",
        "port": "3260"
      }
    ],
    "tag": 1
  },
  "jsonrpc": "2.0",
  "method": "add_portal_group",
  "id": 1
}
response:
{
  "message": "Invalid parameters",
  "code": -32602
}
[root@spdk2 spdk]# ./app/iscsi_tgt/iscsi_tgt -m 0x0101
Starting SPDK v18.04-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0x0101 --file-prefix=spdk_pid8974 ]
EAL: Detected 16 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.spdk_pid8974_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 443:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 645:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x3
reactor.c: 429:_spdk_reactor_run: *NOTICE*: Reactor started on core 8 on socket 1
reactor.c: 429:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 318:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
init_grp.c: 107:spdk_iscsi_init_grp_add_initiator: *WARNING*: Please use "ANY" instead of "ALL"
init_grp.c: 108:spdk_iscsi_init_grp_add_initiator: *WARNING*: Converting "ALL" to "ANY" automatically
posix.c: 221:spdk_posix_sock_create: *ERROR*: bind() failed, errno = 99
posix.c: 231:spdk_posix_sock_create: *ERROR*: IP address 192.168.2.20 not available. Verify IP address in config file and make sure setup script is run before starting spdk app.
portal_grp.c: 190:spdk_iscsi_portal_open: *ERROR*: listen error 192.168.2.20.3260
iscsi_rpc.c: 969:spdk_rpc_add_portal_group: *ERROR*: portal_grp_open failed
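For context on the failure above: errno 99 on Linux is EADDRNOTAVAIL. The spdk_posix_sock_create() path in posix.c calls the kernel's bind(), and the kernel only accepts addresses assigned to interfaces it still owns; 192.168.2.20 was configured inside VPP after the NIC was unbound from the kernel, so from the kernel's point of view the address does not exist. A minimal sketch reproducing the errno (192.0.2.1 is a TEST-NET-1 address, used here on the assumption that it is not configured on the local host):

```python
import errno
import socket

# bind() to an address the kernel does not own fails with EADDRNOTAVAIL,
# the same errno (99 on Linux) that iscsi_tgt logged for 192.168.2.20.
caught = None
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.bind(("192.0.2.1", 3260))  # stand-in for the VPP-only address
except OSError as exc:
    caught = exc.errno
finally:
    sock.close()

print(caught, errno.errorcode.get(caught))  # EADDRNOTAVAIL on Linux
```

This suggests the iscsi_tgt binary fell back to the kernel POSIX socket layer instead of going through VPP for this bind, which is worth checking in the build configuration.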
From: Zawadzki, Tomasz [mailto:tomasz.zawadzki@intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah@us.fujitsu.com>
Cc: Harris, James R <james.r.harris@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; Paul Von-Stamwitz <PVonStamwitz@us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hello Isaac,
Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/
The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.
Suggested flow of starting up applications is:
1. Unbind interfaces from kernel
2. Start VPP and configure the interface via vppctl
3. Start SPDK
4. Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP
Please note there is some leeway here; the only requirement is that the VPP app is started before the SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.
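To make the ordering above concrete, the four steps could be collected in a small driver script. This is only a sketch: the PCI address, interface name, portal address, and core mask are taken from the values in this thread and will differ on other machines; by default it prints the plan rather than executing anything (iscsi_tgt stays in the foreground, so it needs its own terminal when run for real).

```python
import shlex
import subprocess

# Values below (PCI address, interface name, IP, core mask) are the ones
# from this thread's lab setup; substitute your own.
STEPS = [
    # 1. Unbind the NIC from the kernel driver
    "./usertools/dpdk-devbind.py -u 0000:07:00.0",
    # 2. Start VPP, then bring the interface up and address it via vppctl
    "vpp unix { cli-listen /run/vpp/cli.sock }",
    "vppctl set interface state TenGigabitEthernet82/0/0 up",
    "vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24",
    # 3. Start SPDK with no portal groups in iscsi.conf (foreground process)
    "./app/iscsi_tgt/iscsi_tgt -m 0x0101",
    # 4. Add the portal group via RPC once both apps are up
    "./scripts/rpc.py add_portal_group 1 192.168.2.20:3260",
]

def run(dry_run=True):
    """Print the plan; with dry_run=False, execute each step in order."""
    for cmd in STEPS:
        print(cmd)
        if not dry_run:
            subprocess.run(shlex.split(cmd), check=True)

run()  # dry run: just show the ordering
```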
Let me know if you have any questions.
Tomek
From: Isaac Otsiabah [mailto:IOtsiabah@us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki@intel.com>
Cc: Harris, James R <james.r.harris@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; Paul Von-Stamwitz <PVonStamwitz@us.fujitsu.com>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hi Tomasz, Daniel, and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.
For VPP, I first unbind the NIC from the kernel and then start the VPP application:
./usertools/dpdk-devbind.py -u 0000:07:00.0
vpp unix {cli-listen /run/vpp/cli.sock}
What is not clear is that the instructions in the "Configuring iSCSI Target via RPC method" section suggest the iscsi_tgt server must already be running before one can execute the RPC commands, but how do I get the iscsi_tgt server running without an interface to bind on during its initialization?
Please, can any of you explain how to run the SPDK iscsi_tgt application with VPP after unbinding the NIC? For instance, what should change in iscsi.conf, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf?
I would appreciate any help. Thank you.
Isaac