* Re: [SPDK] SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors
@ 2018-05-03 19:40 Isaac Otsiabah
  0 siblings, 0 replies; 5+ messages in thread
From: Isaac Otsiabah @ 2018-05-03 19:40 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 34366 bytes --]

Tomasz, we have been testing vpp but we are not getting good results. To make things simpler, we ran the iscsi_tgt server with one nvme device (with no vpp running) and then ran fio on the same node. The result was good.

We then configured vpp, started the iscsi_tgt server, executed the RPC commands to use one nvme device, and ran fio on the same node. The fio results were not good, IOPS were too low.
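
For reference, the RPC sequence for exporting the nvme device over iSCSI was along these lines (a sketch only, not a copy of our exact commands; the portal/initiator addresses assume the tap interfaces shown in the output further below, and the construct_nvme_bdev call, bdev names, and PCI address are examples):

python /root/spdk_vpp/spdk/scripts/rpc.py construct_nvme_bdev -b Nvme0 -t PCIe -a 0000:04:00.0
python /root/spdk_vpp/spdk/scripts/rpc.py add_portal_group 1 10.0.0.1:3260
python /root/spdk_vpp/spdk/scripts/rpc.py add_initiator_group 2 ANY 10.0.0.2/24
python /root/spdk_vpp/spdk/scripts/rpc.py construct_target_node Target0 Target0_alias Nvme0n1:0 1:2 64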

Please, can you look at it and let us know if you see anything incorrect with the setup here? Thank you.

[root(a)spdk1 ~]# cat fio-bdev-jobfile2
[global]
filename=/dev/sdp
ioengine=libaio
thread=1
group_reporting=1
direct=1
ramp_time=30
runtime=60
iodepth=4
readwrite=randrw
rwmixwrite=100
bs=4096

[test]
numjobs=1


[root(a)spdk1 ~]# fio fio-bdev-jobfile2
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=2
fio-2.8
Starting 1 thread
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/312KB/0KB /s] [0/78/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=13972: Thu May  3 11:36:55 2018
  write: io=18468KB, bw=315129B/s, iops=76, runt= 60011msec
    slat (usec): min=6, max=34, avg= 9.17, stdev= 1.58
    clat (msec): min=15, max=50, avg=25.99, stdev= 1.56
     lat (msec): min=15, max=50, avg=26.00, stdev= 1.56
    clat percentiles (usec):
     |  1.00th=[24960],  5.00th=[24960], 10.00th=[24960], 20.00th=[25984],
     | 30.00th=[25984], 40.00th=[25984], 50.00th=[25984], 60.00th=[25984],
     | 70.00th=[25984], 80.00th=[25984], 90.00th=[25984], 95.00th=[25984],
     | 99.00th=[38144], 99.50th=[38144], 99.90th=[38144], 99.95th=[49920],
     | 99.99th=[49920]
    bw (KB  /s): min=    0, max=  319, per=99.20%, avg=304.56, stdev=28.83
    lat (msec) : 20=0.35%, 50=99.59%, 100=0.09%
  cpu          : usr=0.04%, sys=0.06%, ctx=2308, majf=0, minf=0
  IO depths    : 1=0.1%, 2=149.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=4616/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=2

Run status group 0 (all jobs):
  WRITE: io=18468KB, aggrb=307KB/s, minb=307KB/s, maxb=307KB/s, mint=60011msec, maxt=60011msec

Disk stats (read/write):
  sdp: ios=45/6888, merge=0/0, ticks=1168/179238, in_queue=180446, util=99.66%

[root(a)spdk1 ~]# fio fio-bdev-jobfile2
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
fio-2.8
Starting 1 thread
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/608KB/0KB /s] [0/152/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=14016: Thu May  3 11:38:43 2018
  write: io=36860KB, bw=628951B/s, iops=153, runt= 60012msec
    slat (usec): min=3, max=31, avg= 7.45, stdev= 2.10
    clat (msec): min=16, max=51, avg=26.05, stdev= 1.72
     lat (msec): min=17, max=51, avg=26.06, stdev= 1.72
    clat percentiles (usec):
     |  1.00th=[24960],  5.00th=[24960], 10.00th=[24960], 20.00th=[25984],
     | 30.00th=[25984], 40.00th=[25984], 50.00th=[25984], 60.00th=[25984],
     | 70.00th=[25984], 80.00th=[25984], 90.00th=[25984], 95.00th=[25984],
     | 99.00th=[38144], 99.50th=[38144], 99.90th=[49920], 99.95th=[49920],
     | 99.99th=[50944]
    bw (KB  /s): min=    0, max=  628, per=99.08%, avg=608.34, stdev=58.15
    lat (msec) : 20=0.13%, 50=99.77%, 100=0.13%
  cpu          : usr=0.06%, sys=0.09%, ctx=2303, majf=0, minf=0
  IO depths    : 1=0.1%, 2=0.1%, 4=149.6%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=9212/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: io=36860KB, aggrb=614KB/s, minb=614KB/s, maxb=614KB/s, mint=60012msec, maxt=60012msec

Disk stats (read/write):
  sdp: ios=45/13776, merge=0/0, ticks=1155/358548, in_queue=359751, util=99.68%

[root(a)spdk1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether 90:1b:0e:25:62:62 brd ff:ff:ff:ff:ff:ff
4: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 90:1b:0e:25:62:63 brd ff:ff:ff:ff:ff:ff
5: enp130s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:1b:0e:30:90:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.10/24 brd 192.168.3.255 scope global enp130s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::921b:eff:fe30:9094/64 scope link
       valid_lft forever preferred_lft forever
6: ib0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 4092 qdisc pfifo_fast state DOWN qlen 256
    link/infiniband 80:00:02:08:fe:80:00:00:00:00:00:00:f4:52:14:03:00:88:e7:21 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
7: ib1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 4092 qdisc pfifo_fast state DOWN qlen 256
    link/infiniband 80:00:02:09:fe:80:00:00:00:00:00:00:f4:52:14:03:00:88:e7:22 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether ea:52:7f:d9:9c:21 brd ff:ff:ff:ff:ff:ff
9: ovs-br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 90:1b:0e:25:62:62 brd ff:ff:ff:ff:ff:ff
    inet 133.164.98.222/24 brd 133.164.98.255 scope global ovs-br0
       valid_lft forever preferred_lft forever
    inet6 fe80::8005:6dff:feb8:9d43/64 scope link
       valid_lft forever preferred_lft forever
10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:cd:3d:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr0
       valid_lft forever preferred_lft forever
11: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:cd:3d:00 brd ff:ff:ff:ff:ff:ff
15: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 1e:73:e0:ec:88:d8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 scope global tap0
       valid_lft forever preferred_lft forever
    inet6 fe80::1c73:e0ff:feec:88d8/64 scope link
       valid_lft forever preferred_lft forever
[root(a)spdk1 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2         up       rx packets                100125
                                                     rx bytes                92676313
                                                     tx packets                123127
                                                     tx bytes                44517310
                                                     drops                       9659
                                                     punts                         12
                                                     ip4                        99955
vpp# show interface address
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
  10.0.0.1/24
vpp#

Isaac/Edward
From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Friday, April 27, 2018 7:47 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Cc: Edward Yang <eyang(a)us.fujitsu.com>
Subject: RE: SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors

Hello Isaac,

Described network topology, SPDK and VPP setup looks correct to me.

If the error you are receiving is seen only on management messages like login, this should be fine. It is generated when the iscsi_tgt server receives a TCP reset during a read. It happens with both the posix and vpp net implementations in SPDK. What happens is that iscsiadm sends an RST instead of closing the connection gracefully after sending the iSCSI login command, in order to quickly end the TCP connection.

A quick way to verify that is to run "sudo tcpdump -i interface_name -nn 'tcp[13] & 4!=0'" on the machine where iscsiadm login is used. This will display all packets with the TCP RST flag set.
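
To limit the capture to iSCSI traffic only, the same check can be written with the symbolic flag names and the iSCSI port (the interface name and port 3260 are placeholders for this setup): "sudo tcpdump -i interface_name -nn 'port 3260 and tcp[tcpflags] & tcp-rst != 0'".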

If this message occurred during normal data transfer, it might signify an issue with either the connection or the setup.

Probably the log level of this message should be lowered to NOTICE and the text enhanced.

Thanks,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Thursday, April 26, 2018 8:22 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Edward Yang <eyang(a)us.fujitsu.com<mailto:eyang(a)us.fujitsu.com>>
Subject: RE: SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors


Tomasz, Edward (my co-worker) and I configured vpp on the same network topology as in my previous message. On the server, after starting the iscsi_tgt server, we executed RPC commands to set up the portal group, add the initiator group, configure the nvme bdev, run construct_lvol_bdev, and construct a target node.
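
For reference, the lvol part of that sequence (on top of the nvme bdev and the portal group, initiator group, and target node commands shown further down this thread) was roughly as follows; this is a sketch from memory, and the store name, bdev name, and size are examples:

python /root/spdk_vpp/spdk/scripts/rpc.py construct_lvol_store Nvme0n1 lvs_0
python /root/spdk_vpp/spdk/scripts/rpc.py construct_lvol_bdev -l lvs_0 lbd_0 1024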

From the client, we executed the "iscsiadm -m discovery -t sendtargets -p 192.168.2.10" command and, after login, we see the new iscsi device /dev/sdp and I can read/write to it.


However, the server error I sent on Monday still appears (shown below). Any idea why we still get the error message (conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer) from the iscsi_tgt server? Is our vpp configuration correct? Thank you.


(On Server-192.168.2.10):
[root(a)spdk1 spdk]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up       rx packets               1863376
                                                     rx bytes              2744564878
                                                     tx packets               1874613
                                                     tx bytes               139636606
                                                     drops                       4272
                                                     punts                          4
                                                     ip4                      1863210
                                                     ip6                           10
local0                            0        down

vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.10/24
local0 (dn):
vpp#

[root(a)spdk1 spdk]# ./app/iscsi_tgt/iscsi_tgt -m 0x0101
Starting SPDK v18.04-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0x0101 --file-prefix=spdk_pid14476 ]
EAL: Detected 16 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.spdk_pid14476_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 443:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 650:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x3
reactor.c: 434:_spdk_reactor_run: *NOTICE*: Reactor started on core 8 on socket 1
reactor.c: 434:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL:   using IOMMU type 1 (Type 1)
lvol.c:1051:_spdk_lvs_verify_lvol_name: *ERROR*: lvol with name lbd_0 already exists
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=1, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 1): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target0 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000007, TSIH=2, CID=0, HeaderDigest=off, DataDigest=off

(On Client-192.168.2.20):
[root(a)spdk2 ~]#iscsiadm -m discovery -t sendtargets -p 192.168.2.10
192.168.2.10:3260,1 iqn.2016-06.io.spdk:Target0
[root(a)spdk2 ~]#iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 192.168.2.10,3260] (multiple)
Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 192.168.2.10,3260] successful.
[root(a)spdk2 ~]#
[root(a)spdk2 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 Apr 25 15:03 /dev/sda
brw-rw---- 1 root disk 8,   1 Apr 25 15:03 /dev/sda1
brw-rw---- 1 root disk 8,   2 Apr 25 15:03 /dev/sda2
brw-rw---- 1 root disk 8,   3 Apr 25 15:03 /dev/sda3
brw-rw---- 1 root disk 8,  16 Apr 25 15:03 /dev/sdb
brw-rw---- 1 root disk 8,  32 Apr 25 15:03 /dev/sdc
brw-rw---- 1 root disk 8,  48 Apr 25 15:03 /dev/sdd
brw-rw---- 1 root disk 8,  49 Apr 25 15:03 /dev/sdd1
brw-rw---- 1 root disk 8,  64 Apr 25 15:03 /dev/sde
brw-rw---- 1 root disk 8,  80 Apr 25 15:03 /dev/sdf
brw-rw---- 1 root disk 8,  96 Apr 25 15:03 /dev/sdg
brw-rw---- 1 root disk 8, 112 Apr 25 15:03 /dev/sdh
brw-rw---- 1 root disk 8, 128 Apr 25 15:03 /dev/sdi
brw-rw---- 1 root disk 8, 144 Apr 25 15:03 /dev/sdj
brw-rw---- 1 root disk 8, 160 Apr 25 15:03 /dev/sdk
brw-rw---- 1 root disk 8, 176 Apr 25 15:03 /dev/sdl
brw-rw---- 1 root disk 8, 192 Apr 25 15:03 /dev/sdm
brw-rw---- 1 root disk 8, 208 Apr 25 15:03 /dev/sdn
brw-rw---- 1 root disk 8, 224 Apr 25 15:03 /dev/sdo
brw-rw---- 1 root disk 8, 240 Apr 25 19:17 /dev/sdp

Isaac
From: Isaac Otsiabah
Sent: Monday, April 23, 2018 4:43 PM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors


Tomasz, Edward and I have been working on this again today. We did not create any veth or tap device. Instead, in vppctl we set the 10G interface to 192.168.2.10/24 and then brought the interface up, as shown below.

(On Server):
root(a)spdk1 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.10/24
local0 (dn):

The client IP address is 192.168.2.20.

Then we started the iscsi_tgt server and executed the commands below.

python /root/spdk_vpp/spdk/scripts/rpc.py add_portal_group 1 192.168.2.10:3260
python /root/spdk_vpp/spdk/scripts/rpc.py add_initiator_group 2 ANY 192.168.2.20/24
python /root/spdk_vpp/spdk/scripts/rpc.py construct_malloc_bdev -b MyBdev 64 512
python /root/spdk_vpp/spdk/scripts/rpc.py construct_target_node Target3 Target3_alias MyBdev:0 1:2 64

We got these errors from the iscsi_tgt server (as shown below).

conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=1, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 1): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=2, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 2): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
iscsi.c:2601:spdk_iscsi_op_logout: *NOTICE*: Logout from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 3): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000002, TSIH=4, CID=0, HeaderDigest=off, DataDigest=off

(On client):
However, we can see the iscsi devices from the client machine. Any suggestions on how to get rid of these errors? Were the above steps correct?

Isaac/Edward

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Tuesday, April 17, 2018 7:49 PM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Thank you for all the detailed descriptions, it really helps to understand the steps you took.

I see that you are working with two hosts and using network cards (TenGigabitEthernet82). Actually, all you did in "STEP1" is enough for iscsi_tgt to listen on 192.168.2.20; "STEP2" is not required. The only thing left to do on the target is to configure the portal/initiator_group/target_node, as described here<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>.

"Example: Tap interfaces on a single host" is describing situation when someone would like to try out VPP without using another host and "real" network cards. Same goes for veth interfaces used in scripts for per-patch tests - they are done on single host.
Thinking back, there should be second example with exact setup that you have - two hosts using network cards. I will look into it.

Thanks for all the feedback !

PS. Patch with VPP implementation is merged on master as of today, no need to cherry-pick anymore.

Regards,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 18, 2018 1:29 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, after executing the commands from the paragraph "Example: Tap interfaces on a single host" in the linked document (http://www.spdk.io/doc/iscsi.html#vpp), when I ping vpp from the target server I get a response.


[root(a)spdk2 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# tap connect tap0
tapcli-0
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2        down      drops                          8
vpp# set interface state tapcli-0 up
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2         up       drops                          8
vpp# set interface ip address tapcli-0 192.168.2.20/24
vpp# show int addr
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
 192.168.2.20/24

ip addr add 192.168.2.202/24 dev tap0
ip link set tap0 up

/* pinging vpp from target Server */
[root(a)spdk2 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.116 ms

My question is, what about the 10G interface shown above as "TenGigabitEthernet82/0/0          1        down"? The document does not say anything about it. Shouldn't I set an IP address for it and bring it up?



Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to both the iscsi_tgt application and vpp is not working. From my understanding, vpp is started first on the target host and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).


    -------  192.168.2.10
    |     |  initiator
    -------
       |
       |
       |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |            |   iscsi_tgt
                                --------------

Both systems have a 10Gb NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10Gb NIC (device address 0000:82:00.0).
That worked, so I started the vpp application, and from the startup output the NIC is in use by vpp.
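
The unbind/bind step looked roughly like this (a sketch; dpdk-devbind.py is the script from the DPDK tree, and the exact path may differ on this system):

modprobe uio_pci_generic
./usertools/dpdk-devbind.py --status
./usertools/dpdk-devbind.py -u 0000:82:00.0
./usertools/dpdk-devbind.py -b uio_pci_generic 0000:82:00.0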

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set up the IP address for the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the above 192.168.2.x subnet, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp interface connection is not correct.

Please, how does one create the vpp host interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a route to the routing table?

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.
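
From memory, that minimal config boils down to just the iSCSI section with a node name base, something along these lines (the linked file is authoritative):

[iSCSI]
  NodeBase "iqn.2016-06.io.spdk"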

Suggested flow of starting up applications is:

1. Unbind interfaces from kernel
2. Start VPP and configure the interface via vppctl
3. Start SPDK
4. Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP
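
A condensed command sketch of that flow, reusing commands and addresses that appear elsewhere in this thread (paths and addresses are examples for your setup):

/* 1. unbind the NIC from the kernel */
./usertools/dpdk-devbind.py -u 0000:82:00.0
/* 2. start VPP and configure the interface */
vpp -c /etc/vpp/startup.conf
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vppctl set interface state TenGigabitEthernet82/0/0 up
/* 3. start SPDK */
./app/iscsi_tgt/iscsi_tgt -m 0x101
/* 4. configure the iSCSI target via RPC */
python scripts/rpc.py add_portal_group 1 192.168.2.20:3260
python scripts/rpc.py add_initiator_group 2 ANY 192.168.2.10/24
python scripts/rpc.py construct_malloc_bdev -b MyBdev 64 512
python scripts/rpc.py construct_target_node Target3 Target3_alias MyBdev:0 1:2 64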

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on a CentOS 7.4 (x86_64) system, built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, first I unbind the NIC from the kernel and start the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running in order to execute the RPC commands, but how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any one of you help explain how to run the SPDK iscsi_tgt application with VPP (for instance: what should change in iscsi.conf? After unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to? What address should be assigned to the Portal in iscsi.conf, etc.)?

I would appreciate it if anyone could help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 87078 bytes --]


* Re: [SPDK] SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors
@ 2018-05-07 23:58 Zawadzki, Tomasz
  0 siblings, 0 replies; 5+ messages in thread
From: Zawadzki, Tomasz @ 2018-05-07 23:58 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 35169 bytes --]

Hello Isaac,

I see the results were from kernel tap interfaces. This type of interface is intended only for functional tests or as an example.

Repeating those tests with the setup from the previous emails in the thread (two hosts, each with a 10Gb network card) should show much better results.
Can you please let us know what the results from that setup are?

Thanks,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Thursday, May 3, 2018 12:40 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Cc: Edward Yang <eyang(a)us.fujitsu.com>
Subject: RE: SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors

Tomasz, we have been testing vpp but we are not getting good result. To make things simpler, we ran the iscsi_tgt server with one nvme device (with no vpp running) and then ran fio on the same node. The result was good.

We then configured vpp, started the iscsi_tgt server, executed the RPC commands to use one nvme device, and ran fio on the same node. The fio results were not good, IOPS were too low.

Please, can you look at it and let us if you see anything incorrect with the setup here? Thank you.

[root(a)spdk1 ~]# cat fio-bdev-jobfile2
[global]
filename=/dev/sdp
ioengine=libaio
thread=1
group_reporting=1
direct=1
ramp_time=30
runtime=60
iodepth=4
readwrite=randrw
rwmixwrite=100
bs=4096

[test]
numjobs=1


[root(a)spdk1 ~]# fio fio-bdev-jobfile2
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=2
fio-2.8
Starting 1 thread
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/312KB/0KB /s] [0/78/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=13972: Thu May  3 11:36:55 2018
  write: io=18468KB, bw=315129B/s, iops=76, runt= 60011msec
    slat (usec): min=6, max=34, avg= 9.17, stdev= 1.58
    clat (msec): min=15, max=50, avg=25.99, stdev= 1.56
     lat (msec): min=15, max=50, avg=26.00, stdev= 1.56
    clat percentiles (usec):
     |  1.00th=[24960],  5.00th=[24960], 10.00th=[24960], 20.00th=[25984],
     | 30.00th=[25984], 40.00th=[25984], 50.00th=[25984], 60.00th=[25984],
     | 70.00th=[25984], 80.00th=[25984], 90.00th=[25984], 95.00th=[25984],
     | 99.00th=[38144], 99.50th=[38144], 99.90th=[38144], 99.95th=[49920],
     | 99.99th=[49920]
    bw (KB  /s): min=    0, max=  319, per=99.20%, avg=304.56, stdev=28.83
    lat (msec) : 20=0.35%, 50=99.59%, 100=0.09%
  cpu          : usr=0.04%, sys=0.06%, ctx=2308, majf=0, minf=0
  IO depths    : 1=0.1%, 2=149.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=4616/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=2

Run status group 0 (all jobs):
  WRITE: io=18468KB, aggrb=307KB/s, minb=307KB/s, maxb=307KB/s, mint=60011msec, maxt=60011msec

Disk stats (read/write):
  sdp: ios=45/6888, merge=0/0, ticks=1168/179238, in_queue=180446, util=99.66%

[root(a)spdk1 ~]# fio fio-bdev-jobfile2
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
fio-2.8
Starting 1 thread
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/608KB/0KB /s] [0/152/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=14016: Thu May  3 11:38:43 2018
  write: io=36860KB, bw=628951B/s, iops=153, runt= 60012msec
    slat (usec): min=3, max=31, avg= 7.45, stdev= 2.10
    clat (msec): min=16, max=51, avg=26.05, stdev= 1.72
     lat (msec): min=17, max=51, avg=26.06, stdev= 1.72
    clat percentiles (usec):
     |  1.00th=[24960],  5.00th=[24960], 10.00th=[24960], 20.00th=[25984],
     | 30.00th=[25984], 40.00th=[25984], 50.00th=[25984], 60.00th=[25984],
     | 70.00th=[25984], 80.00th=[25984], 90.00th=[25984], 95.00th=[25984],
     | 99.00th=[38144], 99.50th=[38144], 99.90th=[49920], 99.95th=[49920],
     | 99.99th=[50944]
    bw (KB  /s): min=    0, max=  628, per=99.08%, avg=608.34, stdev=58.15
    lat (msec) : 20=0.13%, 50=99.77%, 100=0.13%
  cpu          : usr=0.06%, sys=0.09%, ctx=2303, majf=0, minf=0
  IO depths    : 1=0.1%, 2=0.1%, 4=149.6%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=9212/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: io=36860KB, aggrb=614KB/s, minb=614KB/s, maxb=614KB/s, mint=60012msec, maxt=60012msec

Disk stats (read/write):
  sdp: ios=45/13776, merge=0/0, ticks=1155/358548, in_queue=359751, util=99.68%

[root(a)spdk1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether 90:1b:0e:25:62:62 brd ff:ff:ff:ff:ff:ff
4: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 90:1b:0e:25:62:63 brd ff:ff:ff:ff:ff:ff
5: enp130s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:1b:0e:30:90:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.10/24 brd 192.168.3.255 scope global enp130s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::921b:eff:fe30:9094/64 scope link
       valid_lft forever preferred_lft forever
6: ib0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 4092 qdisc pfifo_fast state DOWN qlen 256
    link/infiniband 80:00:02:08:fe:80:00:00:00:00:00:00:f4:52:14:03:00:88:e7:21 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
7: ib1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 4092 qdisc pfifo_fast state DOWN qlen 256
    link/infiniband 80:00:02:09:fe:80:00:00:00:00:00:00:f4:52:14:03:00:88:e7:22 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether ea:52:7f:d9:9c:21 brd ff:ff:ff:ff:ff:ff
9: ovs-br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 90:1b:0e:25:62:62 brd ff:ff:ff:ff:ff:ff
    inet 133.164.98.222/24 brd 133.164.98.255 scope global ovs-br0
       valid_lft forever preferred_lft forever
    inet6 fe80::8005:6dff:feb8:9d43/64 scope link
       valid_lft forever preferred_lft forever
10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:cd:3d:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr0
       valid_lft forever preferred_lft forever
11: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:cd:3d:00 brd ff:ff:ff:ff:ff:ff
15: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 1e:73:e0:ec:88:d8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 scope global tap0
       valid_lft forever preferred_lft forever
    inet6 fe80::1c73:e0ff:feec:88d8/64 scope link
       valid_lft forever preferred_lft forever
[root(a)spdk1 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2         up       rx packets                100125
                                                     rx bytes                92676313
                                                     tx packets                123127
                                                     tx bytes                44517310
                                                     drops                       9659
                                                     punts                         12
                                                     ip4                        99955
vpp# show interface address
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
  10.0.0.1/24
vpp#

Isaac/Edward
From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Friday, April 27, 2018 7:47 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Edward Yang <eyang(a)us.fujitsu.com<mailto:eyang(a)us.fujitsu.com>>
Subject: RE: SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors

Hello Isaac,

Described network topology, SPDK and VPP setup looks correct to me.

When error you are receiving is seen only on management messages like login this should be fine. It is generated when iscsi_tgt server receives TCP Reset during read. It happens on both posix and vpp net implementations on SPDK. What happens is iscsiadm sending RST instead of closing the connection after sending iSCSI login command to quickly end the TCP connection.

A quick way to verify that, is to use "sudo tcpdump -i interface_name -nn 'tcp[13] & 4!=0'" on machine when iscsiadm login is used. This will display all packets with TCP RST.

If this message occured during normal data transfer, then it might signify some issue with either connection or setup.

Probably the log level of this message should be lowered to NOTICE and text enhanced.

Thanks,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Thursday, April 26, 2018 8:22 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Edward Yang <eyang(a)us.fujitsu.com<mailto:eyang(a)us.fujitsu.com>>
Subject: RE: SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors


Tomasz, Edward (my co-worker) and i configured vpp on the same network topology as in my previous message. From the server, after starting the iscsi_tgt server, we executed RPC commands to  set up the portal, add initiator, configure nvme bdev, construct_lvol_bdev and constructed a target node.

From the client, we executed "iscsiadm -m discovery -t sendtargets -p 192.168.2.10" command and after login, we see the new iscsi /dev/sdp and i can read/write to it.


However, the server error I sent on Monday still appears (in red). Any idea why we still get the error message (conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer) from iscsi_tgt server? is our vpp configuration  correct?  Thank you.


(On Server-192.168.2.10):
[root(a)spdk1 spdk]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int
              Name               Idx                           State   Counter                    Count
TenGigabitEthernet82/0/0          1         up         rx packets                1863376
                                                                                         rx bytes             2744564878
                                                                                         tx packets               1874613
                                                                                         tx bytes               139636606
                                                                                         drops                               4272
                                                                                         punts                                     4
                                                                                         ip4                             1863210
                                                                                         ip6                                        10
local0                            0                                down

vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.10/24
local0 (dn):
vpp#

[root(a)spdk1 spdk]# ./app/iscsi_tgt/iscsi_tgt -m 0x0101
Starting SPDK v18.04-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0x0101 --file-prefix=spdk_pid14476 ]
EAL: Detected 16 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.spdk_pid14476_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 443:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 650:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x3
reactor.c: 434:_spdk_reactor_run: *NOTICE*: Reactor started on core 8 on socket 1
reactor.c: 434:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL:   using IOMMU type 1 (Type 1)
lvol.c:1051:_spdk_lvs_verify_lvol_name: *ERROR*: lvol with name lbd_0 already exists
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=1, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 1): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target0 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000007, TSIH=2, CID=0, HeaderDigest=off, DataDigest=off

(On Client-192.168.2.10):
[root(a)spdk2 ~]#iscsiadm -m discovery -t sendtargets -p 192.168.2.10
192.168.2.10:3260,1 iqn.2016-06.io.spdk:Target0
[root(a)spdk2 ~]#iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 192.168.2.10,3260] (multiple)
Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 192.168.2.10,3260] successful.
[root(a)spdk2 ~]#
[root(a)spdk2 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 Apr 25 15:03 /dev/sda
brw-rw---- 1 root disk 8,   1 Apr 25 15:03 /dev/sda1
brw-rw---- 1 root disk 8,   2 Apr 25 15:03 /dev/sda2
brw-rw---- 1 root disk 8,   3 Apr 25 15:03 /dev/sda3
brw-rw---- 1 root disk 8,  16 Apr 25 15:03 /dev/sdb
brw-rw---- 1 root disk 8,  32 Apr 25 15:03 /dev/sdc
brw-rw---- 1 root disk 8,  48 Apr 25 15:03 /dev/sdd
brw-rw---- 1 root disk 8,  49 Apr 25 15:03 /dev/sdd1
brw-rw---- 1 root disk 8,  64 Apr 25 15:03 /dev/sde
brw-rw---- 1 root disk 8,  80 Apr 25 15:03 /dev/sdf
brw-rw---- 1 root disk 8,  96 Apr 25 15:03 /dev/sdg
brw-rw---- 1 root disk 8, 112 Apr 25 15:03 /dev/sdh
brw-rw---- 1 root disk 8, 128 Apr 25 15:03 /dev/sdi
brw-rw---- 1 root disk 8, 144 Apr 25 15:03 /dev/sdj
brw-rw---- 1 root disk 8, 160 Apr 25 15:03 /dev/sdk
brw-rw---- 1 root disk 8, 176 Apr 25 15:03 /dev/sdl
brw-rw---- 1 root disk 8, 192 Apr 25 15:03 /dev/sdm
brw-rw---- 1 root disk 8, 208 Apr 25 15:03 /dev/sdn
brw-rw---- 1 root disk 8, 224 Apr 25 15:03 /dev/sdo
brw-rw---- 1 root disk 8, 240 Apr 25 19:17 /dev/sdp

Isaac
From: Isaac Otsiabah
Sent: Monday, April 23, 2018 4:43 PM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors


Tomasz, Edward and I have been working on this again today. We did not create any veth nor tap device. Instead in vppctl we set the 10G interface to 192.168.2.10/10 and then upped the interface as shown below.

(On Server):
root(a)spdk1 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.10/24
local0 (dn):

The client ip address is 192.168.2.20

The we started the iscsi_tgt server and executed the commands below.

python /root/spdk_vpp/spdk/scripts/rpc.py add_portal_group 1 192.168.2.10:3260
python /root/spdk_vpp/spdk/scripts/rpc.py add_initiator_group 2 ANY 192.168.2.20/24
python /root/spdk_vpp/spdk/scripts/rpc.py construct_malloc_bdev -b MyBdev 64 512
python /root/spdk_vpp/spdk/scripts/rpc.py construct_target_node Target3 Target3_alias MyBdev:0 1:2 64

We got these errors from iscsi_tgt server (as shown below).

conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=1, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 1): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=2, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 2): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
iscsi.c:2601:spdk_iscsi_op_logout: *NOTICE*: Logout from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 3): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000002, TSIH=4, CID=0, HeaderDigest=off, DataDigest=off

(On client):
However, we can see the iscsi devices from the client machine. Any suggestion on how to get rid of these errors? Were the above steps correct?

Isaac/Edward

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Tuesday, April 17, 2018 7:49 PM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Thank you for all the detailed descriptions, it really helps to understand the steps you took.

I see that you are working with two hosts and using network cards (TenGigabitEthernet82). Actually all you did in "STEP1" is enough for iscsi_tgt to listen on 192.168.2.20. "STEP2" is not required. Only thing left to do on target is to configure portal/initiator_group/target_node, described here<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>.

"Example: Tap interfaces on a single host" is describing situation when someone would like to try out VPP without using another host and "real" network cards. Same goes for veth interfaces used in scripts for per-patch tests - they are done on single host.
Thinking back, there should be second example with exact setup that you have - two hosts using network cards. I will look into it.

Thanks for all the feedback !

PS. Patch with VPP implementation is merged on master as of today, no need to cherry-pick anymore.

Regards,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 18, 2018 1:29 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, after executing the commands from the paragraph Example: Tap interfaces on a single host in this link (http://www.spdk.io/doc/iscsi.html#vpp) document, when I ping vpp from the target Server i get a respond.


[root(a)spdk2 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# tap connect tap0
tapcli-0
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2        down      drops                          8
vpp# set interface state tapcli-0 up
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2         up       drops                          8
vpp# set interface ip address tapcli-0 192.168.2.20/24
vpp# show int addr
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
 192.168.2.20/24

ip addr add 192.168.2.202/24 dev tap0
ip link set tap0 up

/* pinging vpp from target Server */
[root(a)spdk2 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.116 ms

My question is, what about the 10G interface shown above as "TenGigabitEthernet82/0/0          1        down"? The document does not say anything about it. Shouldn't I set an IP address for it and bring it up?



Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to the iscsi_tgt application and to vpp is not working. From my understanding, vpp is started first on the target host and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10G NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10G NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application; from the startup output, the NIC is in use by vpp.
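
That unbind/driver-load step might look roughly like this (a sketch only; the PCI address is the one used here, and dpdk-devbind.py is run from the DPDK source tree):

modprobe uio_pci_generic                                       # load the generic UIO driver
./usertools/dpdk-devbind.py -u 0000:82:00.0                    # unbind the NIC from its kernel driver
./usertools/dpdk-devbind.py -b uio_pci_generic 0000:82:00.0    # bind it to uio_pci_generic
./usertools/dpdk-devbind.py --status                           # confirm which driver now owns the device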

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set an IP address on the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the above 192.168.2.x subnet, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp interface connection is not correct.

Please, how does one create the vpp host interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.X, turn on IP forwarding, and add a route to the routing table?

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC; at this point it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having the VPP app started before the SPDK app.
Interfaces in VPP (such as tap or veth) can be created and configured at runtime, and are available for use in SPDK as well.
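
As a rough end-to-end sketch of that flow (the PCI address, interface name, core mask, and IP addresses below are simply the ones appearing elsewhere in this thread; exact paths may differ on your system):

./usertools/dpdk-devbind.py -u 0000:82:00.0                                # 1. unbind the NIC from the kernel
vpp -c /etc/vpp/startup.conf                                               # 2. start VPP
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24   #    configure the interface via vppctl
vppctl set interface state TenGigabitEthernet82/0/0 up
./app/iscsi_tgt/iscsi_tgt -m 0x101                                         # 3. start SPDK (in another shell)
# 4. configure the portal group, initiator group, bdev and target node via scripts/rpc.py,
#    as in the RPC examples elsewhere in this thread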

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and start the VPP application.

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions in "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running so that one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf, how do I get the iscsi_tgt server to start without an interface to bind to after unbinding the NIC, and what address should be assigned to the Portal in iscsi.conf, etc.?

I would appreciate if anyone would help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 91460 bytes --]

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [SPDK] SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors
@ 2018-04-27 14:46 Zawadzki, Tomasz
  0 siblings, 0 replies; 5+ messages in thread
From: Zawadzki, Tomasz @ 2018-04-27 14:46 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 25955 bytes --]

Hello Isaac,

The described network topology and the SPDK and VPP setup look correct to me.

When the error you are receiving is seen only on management messages like login, this should be fine. It is generated when the iscsi_tgt server receives a TCP reset during a read, and it happens with both the posix and vpp net implementations in SPDK. What happens is that iscsiadm sends an RST instead of closing the connection after sending the iSCSI login command, in order to quickly end the TCP connection.

A quick way to verify that is to run "sudo tcpdump -i interface_name -nn 'tcp[13] & 4!=0'" on the machine while the iscsiadm login is performed. This will display all packets with the TCP RST flag set.
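
For example, on the initiator (the interface name eth1 is only illustrative), one terminal could run the capture while another performs the discovery and login:

sudo tcpdump -i eth1 -nn 'tcp[13] & 4!=0'                  # prints only packets with the RST flag set
sudo iscsiadm -m discovery -t sendtargets -p 192.168.2.10
sudo iscsiadm -m node --login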

If this message occurred during normal data transfer, then it might signify some issue with either the connection or the setup.

Probably the log level of this message should be lowered to NOTICE and the text enhanced.

Thanks,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Thursday, April 26, 2018 8:22 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Cc: Edward Yang <eyang(a)us.fujitsu.com>
Subject: RE: SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors


Tomasz, Edward (my co-worker) and I configured vpp on the same network topology as in my previous message. On the server, after starting the iscsi_tgt server, we executed RPC commands to set up the portal, add an initiator group, configure an nvme bdev, construct_lvol_bdev, and construct a target node.

From the client, we executed the "iscsiadm -m discovery -t sendtargets -p 192.168.2.10" command and, after login, we see the new iSCSI device /dev/sdp and I can read/write to it.
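
That RPC sequence might look roughly like this (a sketch only; the bdev/lvol names, NVMe PCI address, and lvol size are illustrative, and the exact option names should be checked against scripts/rpc.py -h for your SPDK version):

python ./scripts/rpc.py add_portal_group 1 192.168.2.10:3260
python ./scripts/rpc.py add_initiator_group 2 ANY 192.168.2.20/24
python ./scripts/rpc.py construct_nvme_bdev -b Nvme0 -t pcie -a 0000:04:00.0
python ./scripts/rpc.py construct_lvol_store Nvme0n1 lvs0
python ./scripts/rpc.py construct_lvol_bdev -l lvs0 lbd_0 4096
python ./scripts/rpc.py construct_target_node Target0 Target0_alias lvs0/lbd_0:0 1:2 64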


However, the server error I sent on Monday still appears (in red). Any idea why we still get the error message (conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer) from the iscsi_tgt server? Is our vpp configuration correct? Thank you.


(On Server-192.168.2.10):
[root(a)spdk1 spdk]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up       rx packets             1863376
                                                     rx bytes            2744564878
                                                     tx packets             1874613
                                                     tx bytes             139636606
                                                     drops                     4272
                                                     punts                        4
                                                     ip4                    1863210
                                                     ip6                         10
local0                            0        down

vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.10/24
local0 (dn):
vpp#

[root(a)spdk1 spdk]# ./app/iscsi_tgt/iscsi_tgt -m 0x0101
Starting SPDK v18.04-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0x0101 --file-prefix=spdk_pid14476 ]
EAL: Detected 16 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.spdk_pid14476_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 443:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 650:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x3
reactor.c: 434:_spdk_reactor_run: *NOTICE*: Reactor started on core 8 on socket 1
reactor.c: 434:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL:   using IOMMU type 1 (Type 1)
lvol.c:1051:_spdk_lvs_verify_lvol_name: *ERROR*: lvol with name lbd_0 already exists
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=1, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 1): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target0 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000007, TSIH=2, CID=0, HeaderDigest=off, DataDigest=off

(On Client-192.168.2.20):
[root(a)spdk2 ~]#iscsiadm -m discovery -t sendtargets -p 192.168.2.10
192.168.2.10:3260,1 iqn.2016-06.io.spdk:Target0
[root(a)spdk2 ~]#iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 192.168.2.10,3260] (multiple)
Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 192.168.2.10,3260] successful.
[root(a)spdk2 ~]#
[root(a)spdk2 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 Apr 25 15:03 /dev/sda
brw-rw---- 1 root disk 8,   1 Apr 25 15:03 /dev/sda1
brw-rw---- 1 root disk 8,   2 Apr 25 15:03 /dev/sda2
brw-rw---- 1 root disk 8,   3 Apr 25 15:03 /dev/sda3
brw-rw---- 1 root disk 8,  16 Apr 25 15:03 /dev/sdb
brw-rw---- 1 root disk 8,  32 Apr 25 15:03 /dev/sdc
brw-rw---- 1 root disk 8,  48 Apr 25 15:03 /dev/sdd
brw-rw---- 1 root disk 8,  49 Apr 25 15:03 /dev/sdd1
brw-rw---- 1 root disk 8,  64 Apr 25 15:03 /dev/sde
brw-rw---- 1 root disk 8,  80 Apr 25 15:03 /dev/sdf
brw-rw---- 1 root disk 8,  96 Apr 25 15:03 /dev/sdg
brw-rw---- 1 root disk 8, 112 Apr 25 15:03 /dev/sdh
brw-rw---- 1 root disk 8, 128 Apr 25 15:03 /dev/sdi
brw-rw---- 1 root disk 8, 144 Apr 25 15:03 /dev/sdj
brw-rw---- 1 root disk 8, 160 Apr 25 15:03 /dev/sdk
brw-rw---- 1 root disk 8, 176 Apr 25 15:03 /dev/sdl
brw-rw---- 1 root disk 8, 192 Apr 25 15:03 /dev/sdm
brw-rw---- 1 root disk 8, 208 Apr 25 15:03 /dev/sdn
brw-rw---- 1 root disk 8, 224 Apr 25 15:03 /dev/sdo
brw-rw---- 1 root disk 8, 240 Apr 25 19:17 /dev/sdp

Isaac
From: Isaac Otsiabah
Sent: Monday, April 23, 2018 4:43 PM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors


Tomasz, Edward and I have been working on this again today. We did not create any veth or tap device. Instead, in vppctl we set the 10G interface to 192.168.2.10/24 and brought the interface up, as shown below.
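
For reference, the vppctl commands for that step (the same ones shown earlier in this thread, with this server's address) were along these lines:

vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.10/24
vpp# set interface state TenGigabitEthernet82/0/0 up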

(On Server):
root(a)spdk1 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.10/24
local0 (dn):

The client ip address is 192.168.2.20

Then we started the iscsi_tgt server and executed the commands below.

python /root/spdk_vpp/spdk/scripts/rpc.py add_portal_group 1 192.168.2.10:3260
python /root/spdk_vpp/spdk/scripts/rpc.py add_initiator_group 2 ANY 192.168.2.20/24
python /root/spdk_vpp/spdk/scripts/rpc.py construct_malloc_bdev -b MyBdev 64 512
python /root/spdk_vpp/spdk/scripts/rpc.py construct_target_node Target3 Target3_alias MyBdev:0 1:2 64

We got these errors from iscsi_tgt server (as shown below).

conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=1, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 1): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=2, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 2): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
iscsi.c:2601:spdk_iscsi_op_logout: *NOTICE*: Logout from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 3): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000002, TSIH=4, CID=0, HeaderDigest=off, DataDigest=off

(On client):
However, we can see the iscsi devices from the client machine. Any suggestion on how to get rid of these errors? Were the above steps correct?

Isaac/Edward

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Tuesday, April 17, 2018 7:49 PM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Thank you for all the detailed descriptions, it really helps to understand the steps you took.

I see that you are working with two hosts and using network cards (TenGigabitEthernet82). Actually, everything you did in "STEP1" is enough for iscsi_tgt to listen on 192.168.2.20; "STEP2" is not required. The only thing left to do on the target is to configure the portal/initiator_group/target_node, described here<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>.

"Example: Tap interfaces on a single host" describes a situation where someone would like to try out VPP without using another host and "real" network cards. The same goes for the veth interfaces used in the scripts for per-patch tests - they are run on a single host.
Thinking back, there should be a second example covering the exact setup that you have - two hosts using network cards. I will look into it.

Thanks for all the feedback !

PS. Patch with VPP implementation is merged on master as of today, no need to cherry-pick anymore.

Regards,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 18, 2018 1:29 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, after executing the commands from the paragraph "Example: Tap interfaces on a single host" in the linked document (http://www.spdk.io/doc/iscsi.html#vpp), when I ping vpp from the target server I get a response.


[root(a)spdk2 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# tap connect tap0
tapcli-0
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2        down      drops                          8
vpp# set interface state tapcli-0 up
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2         up       drops                          8
vpp# set interface ip address tapcli-0 192.168.2.20/24
vpp# show int addr
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
 192.168.2.20/24

ip addr add 192.168.2.202/24 dev tap0
ip link set tap0 up

/* pinging vpp from target Server */
[root(a)spdk2 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.116 ms

My question is, what about the 10G interface shown above as "TenGigabitEthernet82/0/0          1        down"? The document does not say anything about it. Shouldn't I set an IP address for it and bring it up?



Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to the iscsi_tgt application and to vpp is not working. From my understanding, vpp is started first on the target host and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10G NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10G NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application; from the startup output, the NIC is in use by vpp.

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set an IP address on the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the above 192.168.2.x subnet, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp interface connection is not correct.

Please, how does one create the vpp host interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.X, turn on IP forwarding, and add a route to the routing table?

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC; at this point it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having the VPP app started before the SPDK app.
Interfaces in VPP (such as tap or veth) can be created and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and start the VPP application.

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions in "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running so that one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf, how do I get the iscsi_tgt server to start without an interface to bind to after unbinding the NIC, and what address should be assigned to the Portal in iscsi.conf, etc.?

I would appreciate if anyone would help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 70281 bytes --]

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [SPDK] SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors
@ 2018-04-26 18:22 Isaac Otsiabah
  0 siblings, 0 replies; 5+ messages in thread
From: Isaac Otsiabah @ 2018-04-26 18:22 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 24687 bytes --]


Tomasz, Edward (my co-worker) and I configured vpp on the same network topology as in my previous message. On the server, after starting the iscsi_tgt server, we executed RPC commands to set up the portal, add an initiator group, configure an nvme bdev, construct_lvol_bdev, and construct a target node.

From the client, we executed the "iscsiadm -m discovery -t sendtargets -p 192.168.2.10" command and, after login, we see the new iSCSI device /dev/sdp and I can read/write to it.


However, the server error I sent on Monday still appears (in red). Any idea why we still get the error message (conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer) from the iscsi_tgt server? Is our vpp configuration correct? Thank you.


(On Server-192.168.2.10):
[root(a)spdk1 spdk]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up       rx packets             1863376
                                                     rx bytes            2744564878
                                                     tx packets             1874613
                                                     tx bytes             139636606
                                                     drops                     4272
                                                     punts                        4
                                                     ip4                    1863210
                                                     ip6                         10
local0                            0        down

vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.10/24
local0 (dn):
vpp#

[root(a)spdk1 spdk]# ./app/iscsi_tgt/iscsi_tgt -m 0x0101
Starting SPDK v18.04-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0x0101 --file-prefix=spdk_pid14476 ]
EAL: Detected 16 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.spdk_pid14476_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 443:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 650:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x3
reactor.c: 434:_spdk_reactor_run: *NOTICE*: Reactor started on core 8 on socket 1
reactor.c: 434:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL:   using IOMMU type 1 (Type 1)
lvol.c:1051:_spdk_lvs_verify_lvol_name: *ERROR*: lvol with name lbd_0 already exists
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=1, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 1): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target0 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000007, TSIH=2, CID=0, HeaderDigest=off, DataDigest=off

(On Client-192.168.2.20):
[root(a)spdk2 ~]#iscsiadm -m discovery -t sendtargets -p 192.168.2.10
192.168.2.10:3260,1 iqn.2016-06.io.spdk:Target0
[root(a)spdk2 ~]#iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 192.168.2.10,3260] (multiple)
Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 192.168.2.10,3260] successful.
[root(a)spdk2 ~]#
[root(a)spdk2 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 Apr 25 15:03 /dev/sda
brw-rw---- 1 root disk 8,   1 Apr 25 15:03 /dev/sda1
brw-rw---- 1 root disk 8,   2 Apr 25 15:03 /dev/sda2
brw-rw---- 1 root disk 8,   3 Apr 25 15:03 /dev/sda3
brw-rw---- 1 root disk 8,  16 Apr 25 15:03 /dev/sdb
brw-rw---- 1 root disk 8,  32 Apr 25 15:03 /dev/sdc
brw-rw---- 1 root disk 8,  48 Apr 25 15:03 /dev/sdd
brw-rw---- 1 root disk 8,  49 Apr 25 15:03 /dev/sdd1
brw-rw---- 1 root disk 8,  64 Apr 25 15:03 /dev/sde
brw-rw---- 1 root disk 8,  80 Apr 25 15:03 /dev/sdf
brw-rw---- 1 root disk 8,  96 Apr 25 15:03 /dev/sdg
brw-rw---- 1 root disk 8, 112 Apr 25 15:03 /dev/sdh
brw-rw---- 1 root disk 8, 128 Apr 25 15:03 /dev/sdi
brw-rw---- 1 root disk 8, 144 Apr 25 15:03 /dev/sdj
brw-rw---- 1 root disk 8, 160 Apr 25 15:03 /dev/sdk
brw-rw---- 1 root disk 8, 176 Apr 25 15:03 /dev/sdl
brw-rw---- 1 root disk 8, 192 Apr 25 15:03 /dev/sdm
brw-rw---- 1 root disk 8, 208 Apr 25 15:03 /dev/sdn
brw-rw---- 1 root disk 8, 224 Apr 25 15:03 /dev/sdo
brw-rw---- 1 root disk 8, 240 Apr 25 19:17 /dev/sdp

Isaac
From: Isaac Otsiabah
Sent: Monday, April 23, 2018 4:43 PM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Subject: SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors


Tomasz, Edward and I have been working on this again today. We did not create any veth or tap device. Instead, in vppctl we set the 10G interface to 192.168.2.10/24 and brought the interface up, as shown below.

(On Server):
root(a)spdk1 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.10/24
local0 (dn):

The client ip address is 192.168.2.20

Then we started the iscsi_tgt server and executed the commands below.

python /root/spdk_vpp/spdk/scripts/rpc.py add_portal_group 1 192.168.2.10:3260
python /root/spdk_vpp/spdk/scripts/rpc.py add_initiator_group 2 ANY 192.168.2.20/24
python /root/spdk_vpp/spdk/scripts/rpc.py construct_malloc_bdev -b MyBdev 64 512
python /root/spdk_vpp/spdk/scripts/rpc.py construct_target_node Target3 Target3_alias MyBdev:0 1:2 64

We got these errors from iscsi_tgt server (as shown below).

conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=1, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 1): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=2, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 2): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
iscsi.c:2601:spdk_iscsi_op_logout: *NOTICE*: Logout from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 3): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000002, TSIH=4, CID=0, HeaderDigest=off, DataDigest=off

(On client):
However, we can see the iscsi devices from the client machine. Any suggestion on how to get rid of these errors? Were the above steps correct?

Isaac/Edward

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Tuesday, April 17, 2018 7:49 PM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Thank you for all the detailed descriptions, it really helps to understand the steps you took.

I see that you are working with two hosts and using network cards (TenGigabitEthernet82). Actually, everything you did in "STEP1" is enough for iscsi_tgt to listen on 192.168.2.20; "STEP2" is not required. The only thing left to do on the target is to configure the portal/initiator_group/target_node, described here<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>.

"Example: Tap interfaces on a single host" describes a situation where someone would like to try out VPP without using another host and "real" network cards. The same goes for the veth interfaces used in the scripts for per-patch tests - they are run on a single host.
Thinking back, there should be a second example covering the exact setup that you have - two hosts using network cards. I will look into it.

Thanks for all the feedback !

PS. Patch with VPP implementation is merged on master as of today, no need to cherry-pick anymore.

Regards,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 18, 2018 1:29 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, after executing the commands from the paragraph "Example: Tap interfaces on a single host" in the linked document (http://www.spdk.io/doc/iscsi.html#vpp), when I ping vpp from the target server I get a response.


[root(a)spdk2 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# tap connect tap0
tapcli-0
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2        down      drops                          8
vpp# set interface state tapcli-0 up
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2         up       drops                          8
vpp# set interface ip address tapcli-0 192.168.2.20/24
vpp# show int addr
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
 192.168.2.20/24

ip addr add 192.168.2.202/24 dev tap0
ip link set tap0 up

/* pinging vpp from target Server */
[root(a)spdk2 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.116 ms

My question is, what about the 10G interface shown above as "TenGigabitEthernet82/0/0          1        down"? The document does not say anything about it. Shouldn't I set an IP address for it and bring it up?



Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to the iscsi_tgt application and to vpp is not working. From my understanding, vpp is started first on the target host and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10G NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10G NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application; from the startup output, the NIC is in use by vpp.

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set an IP address on the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the above 192.168.2.x subnet, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp host-interface connection is not correct.

Please, how does one create the vpp host-interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.X, turn on IP forwarding, and add a route to the routing table?
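For example, I have in mind something along these lines (only a sketch; the 192.168.3.x addresses are placeholders):

/* on the target host (Linux side of the veth pair): move vpp1host to its own subnet and route the 192.168.2.0/24 subnet through it */
ip addr flush dev vpp1host
ip addr add 192.168.3.1/24 dev vpp1host
ip route add 192.168.2.0/24 via 192.168.3.2 dev vpp1host

/* and, only if the Linux host itself must forward packets for other machines, enable IP forwarding */
sysctl -w net.ipv4.ip_forward=1

/* in vppctl: put host-vpp1out on the matching 192.168.3.0/24 address so vpp can route between the two subnets */
vpp# set int ip address host-vpp1out 192.168.3.2/24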

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind to, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

The suggested flow for starting the applications is:

1. Unbind the interfaces from the kernel
2. Start VPP and configure the interface via vppctl
3. Start SPDK
4. Configure the iSCSI target via RPC; at this point it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.
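Roughly, the whole flow might look like the sketch below (only a sketch: it reuses the interface name, PCI address and rpc.py calls that appear elsewhere in this thread, and the paths assume the DPDK and SPDK source directories):

/* 1. unbind the NIC from the kernel */
./usertools/dpdk-devbind.py -u 0000:82:00.0

/* 2. start VPP and configure the interface via vppctl */
vpp -c /etc/vpp/startup.conf
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vppctl set interface state TenGigabitEthernet82/0/0 up

/* 3. start SPDK with no portals or target nodes in the config */
./app/iscsi_tgt/iscsi_tgt -m 0x101 &

/* 4. configure the iSCSI target via RPC */
python scripts/rpc.py add_portal_group 1 192.168.2.20:3260
python scripts/rpc.py add_initiator_group 2 ANY 192.168.2.20/24
python scripts/rpc.py construct_malloc_bdev -b MyBdev 64 512
python scripts/rpc.py construct_target_node Target3 Target3_alias MyBdev:0 1:2 64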

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and then start the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions in "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running before the RPC commands can be executed. But how do I get the iscsi_tgt server running without an interface to bind to during its initialization?

Please, can any of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance: what should change in iscsi.conf, how do I get the iscsi_tgt server to start without an interface to bind to after unbinding the NIC, and what address should be assigned to the Portal in iscsi.conf?

I would appreciate if anyone would help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 65595 bytes --]


* [SPDK] SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors
@ 2018-04-23 23:43 Isaac Otsiabah
  0 siblings, 0 replies; 5+ messages in thread
From: Isaac Otsiabah @ 2018-04-23 23:43 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 18818 bytes --]


Tomasz, Edward and I have been working on this again today. We did not create any veth or tap device. Instead, in vppctl we set the 10G interface to 192.168.2.10/24 and brought the interface up, as shown below.

(On Server):
root(a)spdk1 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.10/24
local0 (dn):

The client ip address is 192.168.2.20

Then we started the iscsi_tgt server and executed the commands below:

python /root/spdk_vpp/spdk/scripts/rpc.py add_portal_group 1 192.168.2.10:3260
python /root/spdk_vpp/spdk/scripts/rpc.py add_initiator_group 2 ANY 192.168.2.20/24
python /root/spdk_vpp/spdk/scripts/rpc.py construct_malloc_bdev -b MyBdev 64 512
python /root/spdk_vpp/spdk/scripts/rpc.py construct_target_node Target3 Target3_alias MyBdev:0 1:2 64
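On the client side, the discovery and login were done with open-iscsi, roughly as follows (a sketch; the target IQN is the one reported in the log below):

iscsiadm -m discovery -t sendtargets -p 192.168.2.10:3260
iscsiadm -m node -T iqn.2016-06.io.spdk:Target3 -p 192.168.2.10:3260 --login
iscsiadm -m session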

We got these errors from iscsi_tgt server (as shown below).

conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=1, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 1): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=2, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 2): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
iscsi.c:2601:spdk_iscsi_op_logout: *NOTICE*: Logout from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 3): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000002, TSIH=4, CID=0, HeaderDigest=off, DataDigest=off

(On client):
However, we can see the iSCSI devices from the client machine. Any suggestions on how to get rid of these errors? Were the above steps correct?

Isaac/Edward

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Tuesday, April 17, 2018 7:49 PM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Thank you for all the detailed descriptions; they really help in understanding the steps you took.

I see that you are working with two hosts and using network cards (TenGigabitEthernet82). Actually, everything you did in "STEP1" is enough for iscsi_tgt to listen on 192.168.2.20; "STEP2" is not required. The only thing left to do on the target is to configure the portal/initiator_group/target_node, as described here<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>.

"Example: Tap interfaces on a single host" is describing situation when someone would like to try out VPP without using another host and "real" network cards. Same goes for veth interfaces used in scripts for per-patch tests - they are done on single host.
Thinking back, there should be second example with exact setup that you have - two hosts using network cards. I will look into it.

Thanks for all the feedback !

PS: The patch with the VPP implementation is merged on master as of today, so there is no need to cherry-pick anymore.

Regards,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 18, 2018 1:29 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, after executing the commands from the paragraph "Example: Tap interfaces on a single host" in this document (http://www.spdk.io/doc/iscsi.html#vpp), when I ping vpp from the target server I get a response.


[root(a)spdk2 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# tap connect tap0
tapcli-0
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2        down      drops                          8
vpp# set interface state tapcli-0 up
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2         up       drops                          8
vpp# set interface ip address tapcli-0 192.168.2.20/24
vpp# show int addr
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
 192.168.2.20/24

ip addr add 192.168.2.202/24 dev tap0
ip link set tap0 up

/* pinging vpp from target Server */
[root(a)spdk2 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.116 ms

My question is, what about the 10G interface shown above as "TenGigabitEthernet82/0/0          1        down"? The document does not say anything about it. Shouldn't I set an IP address for it and bring it up?
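That is, should I also run something like the following for the 10G interface (the address here is just a placeholder, since tapcli-0 already holds 192.168.2.20/24):

vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.30/24
vpp# set interface state TenGigabitEthernet82/0/0 up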



Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to both the iscsi_tgt application and vpp is not working. From my understanding, vpp is started first on the target host and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).


    --------   192.168.2.10
    |      |   initiator
    --------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                ----------------   vpp, vppctl
                                |              |   iscsi_tgt
                                ----------------

Both systems have a 10G NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10G NIC (device address 0000:82:00.0).
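Roughly, the bind step was the following (a sketch; the uio_pci_generic module and the dpdk-devbind.py path match what is used elsewhere in this thread):

modprobe uio_pci_generic
./usertools/dpdk-devbind.py -b uio_pci_generic 0000:82:00.0
./usertools/dpdk-devbind.py --status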
That worked, so I started the vpp application; the startup output below shows the NIC is in use by vpp.

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set an IP address on the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the above 192.168.2.x subnet, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp host-interface connection is not correct.

Please, how does one create the vpp host-interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.X, turn on IP forwarding, and add a route to the routing table?

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind to, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

The suggested flow for starting the applications is:

1. Unbind the interfaces from the kernel
2. Start VPP and configure the interface via vppctl
3. Start SPDK
4. Configure the iSCSI target via RPC; at this point it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and then start the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions in "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running before the RPC commands can be executed. But how do I get the iscsi_tgt server running without an interface to bind to during its initialization?

Please, can any of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance: what should change in iscsi.conf, how do I get the iscsi_tgt server to start without an interface to bind to after unbinding the NIC, and what address should be assigned to the Portal in iscsi.conf?

I would appreciate if anyone would help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 50782 bytes --]


