* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-04-13 18:00 Isaac Otsiabah
  0 siblings, 0 replies; 13+ messages in thread
From: Isaac Otsiabah @ 2018-04-13 18:00 UTC (permalink / raw)
  To: spdk



Hi Tomasz, I configured vpp and thought it was working, because I can ping the initiator machine (192.168.2.10) on my lab's private network.

(On target Server):
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):
vpp#


vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up       rx packets                    58
                                                     rx bytes                    5342
                                                     tx packets                    58
                                                     tx bytes                    5324
                                                     drops                         71
                                                     ip4                           49
local0                            0        down

(The drops were expected because I pinged the interface's own address, 192.168.2.20.)

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.1229 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0412 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0733 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0361 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0332 ms

Statistics: 5 sent, 5 received, 0% packet loss


However, something is missing: I started the iscsi_tgt server (output below) and executed the add_portal_group RPC command to add a portal group, and got the bind() error shown below. iscsi.conf is attached (the portal group and target sections are commented out). The iscsi_tgt binary was linked against the same vpp library that I built.

[root(a)spdk2 spdk]# ./scripts/rpc.py add_portal_group 1 192.168.2.20:3260
Got JSON-RPC error response
request:
{
  "params": {
    "portals": [
      {
        "host": "192.168.2.20",
        "port": "3260"
      }
    ],
    "tag": 1
  },
  "jsonrpc": "2.0",
  "method": "add_portal_group",
  "id": 1
}
response:
{
  "message": "Invalid parameters",
  "code": -32602
}

[root(a)spdk2 spdk]# ./app/iscsi_tgt/iscsi_tgt -m 0x0101
Starting SPDK v18.04-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0x0101 --file-prefix=spdk_pid8974 ]
EAL: Detected 16 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.spdk_pid8974_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 443:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 645:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x3
reactor.c: 429:_spdk_reactor_run: *NOTICE*: Reactor started on core 8 on socket 1
reactor.c: 429:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 318:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
init_grp.c: 107:spdk_iscsi_init_grp_add_initiator: *WARNING*: Please use "ANY" instead of "ALL"
init_grp.c: 108:spdk_iscsi_init_grp_add_initiator: *WARNING*: Converting "ALL" to "ANY" automatically

posix.c: 221:spdk_posix_sock_create: *ERROR*: bind() failed, errno = 99
posix.c: 231:spdk_posix_sock_create: *ERROR*: IP address 192.168.2.20 not available. Verify IP address in config file and make sure setup script is run before starting spdk app.
portal_grp.c: 190:spdk_iscsi_portal_open: *ERROR*: listen error 192.168.2.20.3260
iscsi_rpc.c: 969:spdk_rpc_add_portal_group: *ERROR*: portal_grp_open failed
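
For reference, errno 99 on Linux is EADDRNOTAVAIL ("Cannot assign requested address"), and the message comes from SPDK's posix socket code, which suggests the bind() went through the kernel stack rather than through VPP. A quick, purely illustrative way to confirm that the kernel no longer owns the address:

python -c 'import os; print(os.strerror(99))'   # "Cannot assign requested address"
ip addr show | grep 192.168.2.20                # empty: the kernel does not have this address
vppctl show int addr                            # VPP does have it (see output above)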







From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind on by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.
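
For illustration, a minimal config along those lines could look like the sketch below (the section and option names follow the legacy INI-style iscsi.conf of this era; treat them as an approximation and compare with the linked example):

[Global]
  ReactorMask 0x0101

[iSCSI]
  NodeBase "iqn.2016-06.io.spdk"
  # No PortalGroup or TargetNode sections here; they are added later
  # via rpc.py (add_portal_group, construct_target_node, ...).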

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.
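
As a rough command-level sketch of that flow (the PCI address, interface name, and IP addresses are just the examples from this thread, and the rpc.py method names are the ones used elsewhere in this discussion):

# 1. Unbind the interface from the kernel (and bind it to a DPDK-capable driver)
./dpdk/usertools/dpdk-devbind.py --bind=uio_pci_generic 0000:82:00.0

# 2. Start VPP and configure the interface via vppctl
systemctl start vpp
vppctl set interface state TenGigabitEthernet82/0/0 up
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24

# 3. Start SPDK with a config that has no PortalGroup/TargetNode sections
./app/iscsi_tgt/iscsi_tgt -m 0x0101 -c iscsi.conf &

# 4. Configure the iSCSI target via RPC
./scripts/rpc.py add_portal_group 1 192.168.2.20:3260
./scripts/rpc.py add_initiator_group 2 ANY 192.168.2.0/24
./scripts/rpc.py construct_malloc_bdev -b Malloc0 256 512
./scripts/rpc.py construct_target_node disk1 "Data Disk1" "Malloc0:0" 1:2 64 -d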

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and then start the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running so that one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any of you explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf? After unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf?

I would appreciate it if anyone could help. Thank you.


Isaac


[-- Attachment #3: iscsi-conf.tar --]
[-- Type: application/x-tar, Size: 20480 bytes --]


* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-09-05 18:10 Edward.Yang
  0 siblings, 0 replies; 13+ messages in thread
From: Edward.Yang @ 2018-09-05 18:10 UTC (permalink / raw)
  To: spdk


Hi Tomasz,

Thank you very much for your response.
Isaac is currently on vacation.

You said:
>One host is using SPDK iSCSI target with VPP (or posix for comparison) and the other SPDK iSCSI initiator.
Is there an SPDK iSCSI initiator? (In our test, we set up the iSCSI initiator without SPDK installed on the initiator machine.)
 
You said:
>Since this is evaluation of iSCSI target Session API, please keep in mind that null bdevs were used to eliminate other types of bdevs from affecting the results.
About the null bdevs that you mentioned, did you mean that they are Malloc bdev devices?

The following is how we set up the iSCSI target with VPP.
Please see if our procedure is correct.

On iSCSI target (CentOS 7.4):
============
VPP was configured through /etc/vpp/startup.conf as:
cpu {
   ...
   main-core 1
   ...
   corelist-workers 2-5
   ...
   workers 4
}

# One 10G NIC with its PCI address and parameters:
dev 0000:82:00.0 {
   num-rx-queues 4
   num-rx-desc 1024
}
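
(For context, the dev { } block above sits inside the dpdk { } section of /etc/vpp/startup.conf, i.e. roughly:)

dpdk {
   dev 0000:82:00.0 {
      num-rx-queues 4
      num-rx-desc 1024
   }
}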

Then, we start VPP and set the interface as below:
ifdown enp130s0f0
/root/spdk_vpp_pr/spdk/dpdk/usertools/dpdk-devbind.py --bind=uio_pci_generic 82:00.0
systemctl start vpp
vppctl set interface state TenGigabitEthernet82/0/0 up
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.10/24

Then we start the iSCSI target, add a portal group and an initiator group, construct 4 malloc block devices (each 256 MB with a 512 B sector size), and create a target node:
/root/spdk_vpp_pr/spdk/app/iscsi_tgt/iscsi_tgt -m 0x01 -c /usr/local/etc/spdk/iscsi.conf
python /root/spdk_vpp_pr/spdk/scripts/rpc.py add_portal_group 1 192.168.2.10:3260
python /root/spdk_vpp_pr/spdk/scripts/rpc.py add_initiator_group 2 ANY 192.168.2.50/24
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc0 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc1 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc2 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc3 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_target_node disk1 "Data Disk1" "Malloc0:0 Malloc1:1 Malloc2:2 Malloc3:3" 1:2 64 -d
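
(For readability, the construct_target_node arguments in the last command break down roughly as follows; this is based on the rpc.py usage text of this SPDK version, so double-check against scripts/rpc.py:)

# construct_target_node <name> <alias> "<bdev:lun> ..." "<pg_tag:ig_tag> ..." <queue_depth> [-d]
#   disk1                         target node name
#   "Data Disk1"                  alias
#   "Malloc0:0 ... Malloc3:3"     bdev-to-LUN mapping (Malloc0 becomes LUN 0, etc.)
#   1:2                           portal group tag 1 mapped to initiator group tag 2
#   64                            queue depth
#   -d                            disable CHAP authentication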

On iSCSI initiator (CentOS 7.4):
iscsiadm -m discovery -t sendtargets -p 192.168.2.10
iscsiadm -m node --login  (after login,  /dev/sdd - /dev/sdg are added)
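
(To double-check which /dev nodes the new sessions map to, something like this works on the initiator; both commands are standard open-iscsi/lsscsi tools, not SPDK-specific:)

iscsiadm -m session -P 3 | grep -E "Target:|Attached scsi disk"
lsscsi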

Then, we run fio through the following job file, fio_vpp_randread.txt, as:
[global]
ioengine=libaio
direct=1
ramp_time=15
runtime=60
iodepth=32
randrepeat=0
bs=4K
group_reporting
time_based

[job1]
rw=randread
filename=/dev/sdd
name=raw-random-read

[job2]
rw=randread
filename=/dev/sde
name=raw-random-read

[job3]
rw=randread
filename=/dev/sdf
name=raw-random-read

[job4]
rw=randread
filename=/dev/sdg
name=raw-random-read

The fio job report:

[root(a)gluster3 ~]# fio fio_vpp_randread.txt
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=338MiB/s,w=0KiB/s][r=86.6k,w=0 IOPS][eta 00m:00s]   
raw-random-read: (groupid=0, jobs=4): err= 0: pid=5355: Wed Sep  5 12:20:11 2018
   read: IOPS=89.2k, BW=348MiB/s (365MB/s)(20.4GiB/60002msec)
    slat (nsec): min=1543, max=1054.4k, avg=9189.80, stdev=11823.44
    clat (usec): min=304, max=8993, avg=1422.99, stdev=526.70
     lat (usec): min=318, max=9000, avg=1432.50, stdev=525.91
    clat percentiles (usec):
     |  1.00th=[  652],  5.00th=[  799], 10.00th=[  898], 20.00th=[ 1004],
     | 30.00th=[ 1106], 40.00th=[ 1205], 50.00th=[ 1303], 60.00th=[ 1434],
     | 70.00th=[ 1582], 80.00th=[ 1778], 90.00th=[ 2114], 95.00th=[ 2409],
     | 99.00th=[ 3163], 99.50th=[ 3556], 99.90th=[ 4490], 99.95th=[ 4883],
     | 99.99th=[ 5735]
   bw (  KiB/s): min=68987, max=134923, per=25.08%, avg=89483.28, stdev=14270.72, samples=480
   iops        : min=17246, max=33730, avg=22370.52, stdev=3567.67, samples=480
  lat (usec)   : 500=0.02%, 750=3.37%, 1000=15.86%
  lat (msec)   : 2=68.19%, 4=12.33%, 10=0.22%
  cpu          : usr=6.75%, sys=20.85%, ctx=2226569, majf=0, minf=749
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=127.2%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=5352130,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=348MiB/s (365MB/s), 348MiB/s-348MiB/s (365MB/s-365MB/s), io=20.4GiB (21.9GB), run=60002-60002msec

Disk stats (read/write):
  sdd: ios=1732347/0, merge=19057/0, ticks=2248152/0, in_queue=2247789, util=99.91%
  sde: ios=1558604/0, merge=18701/0, ticks=2314201/0, in_queue=2314047, util=99.92%
  sdf: ios=1719023/0, merge=19309/0, ticks=2257093/0, in_queue=2256728, util=99.94%
  sdg: ios=1723574/0, merge=18756/0, ticks=2253409/0, in_queue=2253174, util=99.95%

Thanks for any advice.

Regards,
Edward

>-----Original Message-----
>From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
>Sent: Friday, August 31, 2018 12:16 AM
>To: Otsiabah, Isaac <IOtsiabah(a)us.fujitsu.com>; Storage Performance
>Development Kit <spdk(a)lists.01.org>; 'Isaac Otsiabah'
>Cc: Yang, Edward <Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul
><PVonStamwitz(a)us.fujitsu.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hello Isaac,
>
>Sorry for delayed response.
>
>Please keep in mind that the patch for switch from VCL to Session API is in
>active development, with changes being applied regularly.
>https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gerrithub.io_-
>23_c_spdk_spdk_-2B_417056_&d=DwIFAg&c=09aR81AqZjK9FqV5BSCPBw&r=-
>Vl2krtpQKVbTcyqQsihSQLZVp3NEoqJsxIT3rBIgJk&m=zBi2rfFfwt1hJFK1mCXeH7
>BK1b9GjLxSZ8CEmjaP-
>Dk&s=0ggEYmwl4WBiSgplNGj9Cqya9bDKmXoQJubuybSiCdg&e=
>We are still trying to work out best practices and recommended setup for SPDK
>iSCSI target running along with VPP.
>
>Our test environments at this time consists of Fedora 26 machines, using either
>one or two 40GB/s interfaces per host. One host is using SPDK iSCSI target with
>VPP (or posix for comparison) and the other SPDK iSCSI initiator.
>After switch to Session API we were able to saturate a single 40GB/s interface
>with much lower core usage in VPP compared to VCL. As well as reduce
>number of SPDK iSCSI target cores used in such setup. Both Session API and
>posix implementation were able to saturate 40GB/s, while having similar CPU
>efficiency. We are working on evaluating higher throughputs (80GB/s and
>more), as well looking at optimizations to usage of Sessions API within SPDK.
>
>We have not seen much change from modifying most VPP config parameters
>from defaults, at this time, for our setup. Keeping default num-mbufs and
>socket-mem to 1024. Mostly changing parameters regarding number of worker
>cores and num-rx-queues.
>For iSCSI parameters, both for posix and Session API, at certain throughputs
>increasing number of targets/luns within portal group were needed. We were
>doing out comparisons at around 32-128 target/luns. Since this is evaluation of
>iSCSI target Session API, please keep in mind that null bdevs were used to
>eliminate other types of bdevs from affecting the results.
>Besides that key point for higher throughputs is having iSCSI initiator actually
>be able to generate enough traffic for iSCSI target.
>
>May I ask what kind of setup you are using for the comparisons ? Are you
>targeting 10GB/s interfaces as noted in previous emails ?
>
>Tomek
>
>-----Original Message-----
>From: IOtsiabah(a)us.fujitsu.com [mailto:IOtsiabah(a)us.fujitsu.com]
>Sent: Friday, August 31, 2018 12:51 AM
>To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac
>Otsiabah' <IMCEAEX-
>_O=FMSA_OU=EXCHANGE+20ADMINISTRATIVE+20GROUP+20+28FYDIBOHF23
>SPDLT+29_CN=RECIPIENTS_CN=IOTSIABAH(a)fujitsu.local>; Zawadzki, Tomasz
><tomasz.zawadzki(a)intel.com>
>Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>;
>Edward.Yang(a)us.fujitsu.com; PVonStamwitz(a)us.fujitsu.com;
>Edward.Yang(a)us.fujitsu.com
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hi Tomasz,  you probably went on vacation and are back now. Previously, I
>sent you the two emails below. Please, can you respond to them for us? Thank
>you.
>
>Isaac
>
>-----Original Message-----
>From: Otsiabah, Isaac
>Sent: Tuesday, August 14, 2018 12:40 PM
>To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac
>Otsiabah'; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
>Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>; Yang, Edward
><Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul
><PVonStamwitz(a)us.fujitsu.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Tomasz, we can increased the amount of hugepages used by vpp by increasing
>the dpdk parameters
>
>   socket-mem 1024
>   num-mbufs 65536
>
>however, there were no improvement in fio performance test results. We are
>running our test on Centos 7. Are you testing vpp on Fedora instead? Can you
>share with us your test environment information?
>
>Isaac/Edward
>
>-----Original Message-----
>From: Otsiabah, Isaac
>Sent: Tuesday, August 14, 2018 10:15 AM
>To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac
>Otsiabah'; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
>Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>; Yang, Edward
><Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul
><PVonStamwitz(a)us.fujitsu.com>; Yang, Edward
><Edward.Yang(a)us.fujitsu.com>; Otsiabah, Isaac <IOtsiabah(a)us.fujitsu.com>
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hi Tomasz, we obtained your vpp patch 417056 (git fetch
>https://urldefense.proofpoint.com/v2/url?u=https-
>3A__review.gerrithub.io_spdk_spdk&d=DwIFAg&c=09aR81AqZjK9FqV5BSCPBw
>&r=-
>Vl2krtpQKVbTcyqQsihSQLZVp3NEoqJsxIT3rBIgJk&m=zBi2rfFfwt1hJFK1mCXeH7
>BK1b9GjLxSZ8CEmjaP-
>Dk&s=Je6FxQzCr9VtkhD7ayQzCaSkQp_yXz_166aHDjWwAOA&e=
>refs/changes/56/417056/16:test16) We are testing it and have a few questions.
>
>1.. Please, can you share with us your test results and your test environment
>setup or configuration?
>
>2.   From experiment, we  see that vpp always uses 105 pages from the
>available hugepages in the system regardless of the amount available. Is there
>a way to increase the amount of huge pages for vpp?
>
>Isaac
>
>From: Isaac Otsiabah
>Sent: Tuesday, April 17, 2018 11:46 AM
>To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com>; 'spdk(a)lists.01.org'
><spdk(a)lists.01.org>
>Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel
><daniel.verkamp(a)intel.com>; Paul Von-Stamwitz
><PVonStamwitz(a)us.fujitsu.com>
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hi Tomasz, I got the SPDK patch. My network topology is simple but making
>the network ip address accessible to the iscsi_tgt application and to vpp is not
>working. From my understanding, vpp is started first on the target host and
>then iscsi_tgt application is started after the network setup is done (please,
>correct me if this is not the case).
>
>
>    -------  192.168.2.10
>    |      |  initiator
>    -------
>        |
>        |
>        |
>-------------------------------------------- 192.168.2.0
>                                    |
>                                    |
>                                    |  192.168.2.20
>                                --------------   vpp, vppctl
>                                |                |  iscsi_tgt
>                                --------------
>
>Both system have a 10GB NIC
>
>(On target Server):
>I set up the vpp environment variables through sysctl command.
>I unbind the kernel driver and loaded the DPDK uio_pci_generic driver for the
>first  10GB NIC (device address= 0000:82:00.0).
>That worked so I started the vpp application and from the startup output, the
>NIC is in used by vpp
>
>[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
>vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
>load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
>load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development
>Kit (DPDK))
>load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
>load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
>load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator
>addressing for IPv6)
>load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
>load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
>load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data
>plane)
>load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
>load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
>load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid
>Deployment on IPv4 Infrastructure (RFC5969))
>load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory
>Interface (experimetal))
>load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address
>Translation)
>load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
>load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for
>Container integration)
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/acl_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/lb_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/memif_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/nat_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin:
>/usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
>vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir
>/run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-
>mem 64,64
>EAL: No free hugepages reported in hugepages-1048576kB
>EAL: VFIO support initialized
>DPDK physical memory layout:
>Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0,
>hugepage_sz:2097152, nchannel:0, nrank:0 Segment 1: IOVA:0x3e000000,
>len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152,
>nchannel:0, nrank:0 Segment 2: IOVA:0x3fc00000, len:2097152,
>virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
>Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000,
>socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0 Segment 4:
>IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1,
>hugepage_sz:2097152, nchannel:0, nran
>
>STEP1:
>Then from vppctl command prompt, I set up ip address for the 10G interface
>and up it. From vpp, I can ping the initiator machine and vice versa as shown
>below.
>
>vpp# show int
>              Name               Idx       State          Counter          Count
>TenGigabitEthernet82/0/0          1        down
>local0                            0        down
>vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24 vpp#
>set interface state TenGigabitEthernet82/0/0 up vpp# show int
>              Name               Idx       State          Counter          Count
>TenGigabitEthernet82/0/0          1         up
>local0                            0        down
>vpp# show int address
>TenGigabitEthernet82/0/0 (up):
>  192.168.2.20/24
>local0 (dn):
>
>/* ping initiator from vpp */
>
>vpp# ping 192.168.2.10
>64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
>64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
>64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
>64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
>64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms
>
>(On Initiator):
>/* ping vpp interface from initiator*/
>[root(a)spdk1 ~]# ping -c 2 192.168.2.20
>PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
>64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
>64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms
>
>STEP2:
>However, when I start the iscsi_tgt server, it does not have access to the above
>192.168.2.x subnet so I ran these commands on the target server to create
>veth and then connected it to a vpp host-interface as follows:
>
>ip link add name vpp1out type veth peer name vpp1host ip link set dev
>vpp1out up ip link set dev vpp1host up ip addr add 192.168.2.201/24 dev
>vpp1host
>
>vpp# create host-interface name vpp1out
>vpp# set int state host-vpp1out up
>vpp# set int ip address host-vpp1out 192.168.2.202 vpp# show int addr
>TenGigabitEthernet82/0/0 (up):
>  192.168.2.20/24
>host-vpp1out (up):
>  192.168.2.202/24
>local0 (dn):
>vpp# trace add af-packet-input 10
>
>
>/* From host, ping vpp */
>
>[root(a)spdk2 ~]# ping -c 2 192.168.2.202
>PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
>64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
>64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms
>
>/* From vpp, ping host */
>vpp# ping 192.168.2.201
>64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
>64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
>64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
>64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
>64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms
>
>Statistics: 5 sent, 5 received, 0% packet loss
>
>From the target host, I still cannot ping the initiator (192.168.2.10), it does not
>go through the vpp interface so my vpp interface connection is not correct.
>
>Please, how does one create the vpp host interface and connect it, so that host
>applications (ie. iscsi_tgt) can communicate in the 192.168.2 subnet? In STEP2,
>should I use a different subnet like 192.168.3.X and turn on IP forwarding add a
>route to the routing table?
>
>Isaac
>
>From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
>Sent: Thursday, April 12, 2018 12:27 AM
>To: Isaac Otsiabah
><IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
>Cc: Harris, James R
><james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp,
>Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul
>Von-Stamwitz
><PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hello Isaac,
>
>Are you using following patch ? (I suggest cherry picking it)
>https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gerrithub.io_-
>23_c_389566_&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiU
>sKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-
>xkYWjIUTA2lTbCWuTg&s=FE90i1g4fLqz2TZ_eM5V21BWuBXg2eB7L18qpVk7DS
>M&e=
>
>SPDK iSCSI target can be started without specific interface to bind on, by not
>specifying any target nodes or portal groups. They can be added later via RPC
>https://urldefense.proofpoint.com/v2/url?u=http-
>3A__www.spdk.io_doc_iscsi.html-23iscsi-
>5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJ
>f45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-
>xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-
>h5zZerTcOn1D9wfxM&e=.
>Please see https://urldefense.proofpoint.com/v2/url?u=https-
>3A__github.com_spdk_spdk_blob_master_test_iscsi-
>5Ftgt_lvol_iscsi.conf&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQC
>OCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-
>xkYWjIUTA2lTbCWuTg&s=jSKH9IX5rn3DlmRDFR35I4V5I-
>bT1xxWSqSp1pIXygw&e= for example of minimal iSCSI config.
>
>Suggested flow of starting up applications is:
>
>1.       Unbind interfaces from kernel
>
>2.       Start VPP and configure the interface via vppctl
>
>3.       Start SPDK
>
>4.       Configure the iSCSI target via RPC, at this time it should be possible to
>use the interface configured in VPP
>
>Please note, there is some leeway here. The only requirement is having VPP
>app started before SPDK app.
>Interfaces in VPP can be created (like tap or veth) and configured at runtime,
>and are available for use in SPDK as well.
>
>Let me know if you have any questions.
>
>Tomek
>
>From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
>Sent: Wednesday, April 11, 2018 8:47 PM
>To: Zawadzki, Tomasz
><tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
>Cc: Harris, James R
><james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp,
>Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul
>Von-Stamwitz
><PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
>Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hi Tomaz, Daniel and Jim, i am trying to test VPP so build the VPP on a Centos
>7.4 (x86_64), build the SPDK and tried to run the ./app/iscsi_tgt/iscsi_tgt
>application.
>
>For VPP, first, I unbind the nick from the kernel as and start VPP application.
>
>./usertools/dpdk-devbind.py -u 0000:07:00.0
>
>vpp unix {cli-listen /run/vpp/cli.sock}
>
>Unbinding the nic takes down the interface, however,
>the  ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface  to bind
>to during startup so it fails to start. The information at:
>"Running SPDK with VPP
>VPP application has to be started before SPDK iSCSI target, in order to enable
>usage of network interfaces. After SPDK iSCSI target initialization finishes,
>interfaces configured within VPP will be available to be configured as portal
>addresses. Please refer to Configuring iSCSI Target via RPC
>method<https://urldefense.proofpoint.com/v2/url?u=http-
>3A__www.spdk.io_doc_iscsi.html-23iscsi-
>5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJ
>f45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-
>xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-
>h5zZerTcOn1D9wfxM&e=>."
>
>is not clear because the instructions at the "Configuring iSCSI Traget via RPC
>method" suggest the iscsi_tgt server is running for one to be able to execute
>the RPC commands but, how do I get the iscsi_tgt server running without an
>interface to bind on during its initialization?
>
>Please, can anyone of you help to explain how to run the SPDK iscsi_tgt
>application with VPP (for instance, what should change in iscsi.conf?) after
>unbinding the nic, how do I get the iscsi_tgt server to start without an interface
>to bind to, what address should be assigned to the Portal in iscsi.conf.. etc)?
>
>I would appreciate if anyone would help. Thank you.
>
>
>Isaac
>_______________________________________________
>SPDK mailing list
>SPDK(a)lists.01.org
>https://urldefense.proofpoint.com/v2/url?u=https-
>3A__lists.01.org_mailman_listinfo_spdk&d=DwICAg&c=09aR81AqZjK9FqV5BSC
>PBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeG
>WugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=2iHpVGzaloMHLL179exqyisY-
>BLZOoEFh5Y4Z7SArYs&e=


* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-08-31  7:16 Zawadzki, Tomasz
  0 siblings, 0 replies; 13+ messages in thread
From: Zawadzki, Tomasz @ 2018-08-31  7:16 UTC (permalink / raw)
  To: spdk


Hello Isaac,

Sorry for delayed response.

Please keep in mind that the patch for the switch from VCL to the Session API is in active development, with changes being applied regularly. https://review.gerrithub.io/#/c/spdk/spdk/+/417056/
We are still trying to work out best practices and recommended setup for SPDK iSCSI target running along with VPP.

Our test environment at this time consists of Fedora 26 machines, using either one or two 40GB/s interfaces per host. One host is running the SPDK iSCSI target with VPP (or posix for comparison) and the other the SPDK iSCSI initiator.
After the switch to the Session API we were able to saturate a single 40GB/s interface with much lower core usage in VPP compared to VCL, as well as reduce the number of SPDK iSCSI target cores used in such a setup. Both the Session API and the posix implementation were able to saturate 40GB/s, while having similar CPU efficiency. We are working on evaluating higher throughputs (80GB/s and more), as well as looking at optimizations to the usage of the Session API within SPDK.

We have not seen much change from modifying most VPP config parameters from their defaults, at this time, for our setup. We keep the default num-mbufs and socket-mem at 1024, mostly changing parameters regarding the number of worker cores and num-rx-queues.
For iSCSI parameters, both for posix and the Session API, increasing the number of targets/LUNs within the portal group was needed at certain throughputs. We were doing our comparisons at around 32-128 targets/LUNs. Since this is an evaluation of the iSCSI target Session API, please keep in mind that null bdevs were used to eliminate other types of bdevs from affecting the results.
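(As a side note for anyone reproducing this: null bdevs are their own bdev type, distinct from Malloc ramdisks. With the rpc.py of this era they would be created with something like the line below; the method name and argument order should be verified against scripts/rpc.py for the revision in use.)

./scripts/rpc.py construct_null_bdev Null0 8192 512    # 8192 MB null bdev with 512 B blocks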
Besides that, the key point for higher throughputs is having the iSCSI initiator actually be able to generate enough traffic for the iSCSI target.

May I ask what kind of setup you are using for the comparisons? Are you targeting 10GB/s interfaces, as noted in previous emails?

Tomek

-----Original Message-----
From: IOtsiabah(a)us.fujitsu.com [mailto:IOtsiabah(a)us.fujitsu.com] 
Sent: Friday, August 31, 2018 12:51 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac Otsiabah' <IMCEAEX-_O=FMSA_OU=EXCHANGE+20ADMINISTRATIVE+20GROUP+20+28FYDIBOHF23SPDLT+29_CN=RECIPIENTS_CN=IOTSIABAH(a)fujitsu.local>; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>; Edward.Yang(a)us.fujitsu.com; PVonStamwitz(a)us.fujitsu.com; Edward.Yang(a)us.fujitsu.com
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz,  you probably went on vacation and are back now. Previously, I sent you the two emails below. Please, can you respond to them for us? Thank you.

Isaac

-----Original Message-----
From: Otsiabah, Isaac 
Sent: Tuesday, August 14, 2018 12:40 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac Otsiabah'; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul <PVonStamwitz(a)us.fujitsu.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Tomasz, we increased the amount of hugepages used by vpp by increasing the dpdk parameters

   socket-mem 1024
   num-mbufs 65536
 
however, there was no improvement in the fio performance test results. We are running our tests on CentOS 7. Are you testing vpp on Fedora instead? Can you share your test environment information with us?
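
(For reference, both of those settings live in the dpdk { } section of /etc/vpp/startup.conf, roughly as below; the values shown are the ones quoted above:)

dpdk {
   socket-mem 1024
   num-mbufs 65536
}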

Isaac/Edward

-----Original Message-----
From: Otsiabah, Isaac 
Sent: Tuesday, August 14, 2018 10:15 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac Otsiabah'; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul <PVonStamwitz(a)us.fujitsu.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>; Otsiabah, Isaac <IOtsiabah(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, we obtained your vpp patch 417056 (git fetch https://review.gerrithub.io/spdk/spdk refs/changes/56/417056/16:test16). We are testing it and have a few questions.

1. Please, can you share with us your test results and your test environment setup or configuration?

2. From our experiments, we see that vpp always uses 105 pages of the available hugepages in the system, regardless of the amount available. Is there a way to increase the number of hugepages used by vpp?

Isaac

From: Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to both the iscsi_tgt application and vpp is not working. From my understanding, vpp is started first on the target host, and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10GB NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10GB NIC (device address 0000:82:00.0).
That worked, so I started the vpp application; from the startup output, the NIC is in use by vpp:

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then from vppctl command prompt, I set up ip address for the 10G interface and up it. From vpp, I can ping the initiator machine and vice versa as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the above 192.168.2.x subnet, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); it does not go through the vpp interface, so my vpp interface connection is not correct.

Please, how does one create the vpp host interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.X, turn on IP forwarding, and add a route to the routing table?
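
Just to make that second option concrete, what I have in mind is roughly the following (192.168.3.x is only an example, and this is a sketch of the idea rather than a verified configuration):

# host side: move the veth end to a separate subnet and route 192.168.2.0/24 through vpp
sysctl -w net.ipv4.ip_forward=1
ip addr flush dev vpp1host
ip addr add 192.168.3.2/24 dev vpp1host
ip route add 192.168.2.0/24 via 192.168.3.1

# vpp side: matching address on the host-interface
vpp# set int ip address host-vpp1out 192.168.3.1/24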

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using following patch ? (I suggest cherry picking it) https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gerrithub.io_-23_c_389566_&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=FE90i1g4fLqz2TZ_eM5V21BWuBXg2eB7L18qpVk7DSM&e=

SPDK iSCSI target can be started without specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC https://urldefense.proofpoint.com/v2/url?u=http-3A__www.spdk.io_doc_iscsi.html-23iscsi-5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-h5zZerTcOn1D9wfxM&e=.
Please see https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_spdk_spdk_blob_master_test_iscsi-5Ftgt_lvol_iscsi.conf&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=jSKH9IX5rn3DlmRDFR35I4V5I-bT1xxWSqSp1pIXygw&e= for example of minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and then start the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<https://urldefense.proofpoint.com/v2/url?u=http-3A__www.spdk.io_doc_iscsi.html-23iscsi-5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-h5zZerTcOn1D9wfxM&e=>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running so that one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any of you explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf? After unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf?

I would appreciate it if anyone could help. Thank you.


Isaac
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.01.org_mailman_listinfo_spdk&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=2iHpVGzaloMHLL179exqyisY-BLZOoEFh5Y4Z7SArYs&e=


* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-08-30 22:51 IOtsiabah
  0 siblings, 0 replies; 13+ messages in thread
From: IOtsiabah @ 2018-08-30 22:51 UTC (permalink / raw)
  To: spdk


Hi Tomasz,  you probably went on vacation and are back now. Previously, I sent you the two emails below. Please, can you respond to them for us? Thank you.

Isaac

-----Original Message-----
From: Otsiabah, Isaac 
Sent: Tuesday, August 14, 2018 12:40 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac Otsiabah'; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul <PVonStamwitz(a)us.fujitsu.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Tomasz, we increased the amount of hugepages used by vpp by increasing the dpdk parameters

   socket-mem 1024
   num-mbufs 65536
 
however, there was no improvement in the fio performance test results. We are running our tests on CentOS 7. Are you testing vpp on Fedora instead? Can you share your test environment information with us?

Isaac/Edward

-----Original Message-----
From: Otsiabah, Isaac 
Sent: Tuesday, August 14, 2018 10:15 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac Otsiabah'; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul <PVonStamwitz(a)us.fujitsu.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>; Otsiabah, Isaac <IOtsiabah(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, we obtained your vpp patch 417056 (git fetch https://review.gerrithub.io/spdk/spdk refs/changes/56/417056/16:test16). We are testing it and have a few questions.

1. Please, can you share with us your test results and your test environment setup or configuration?

2. From our experiments, we see that vpp always uses 105 pages of the available hugepages in the system, regardless of the amount available. Is there a way to increase the number of hugepages used by vpp?

Isaac

From: Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to both the iscsi_tgt application and vpp is not working. From my understanding, vpp is started first on the target host, and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10GB NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10GB NIC (device address 0000:82:00.0).
That worked, so I started the vpp application; from the startup output, the NIC is in use by vpp:

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the above 192.168.2.x subnet, so I ran these commands on the target server to create a veth pair and connect it to a vpp host-interface, as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp host-interface connection is not correct.

Please, how does one create the vpp host-interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.X, turn on IP forwarding, and add a route to the routing table?
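
(To make the question concrete, here is a rough, unverified sketch of the routed variant I have in mind, using 192.168.3.0/24 as the host-side subnet; the addresses and routes below are only an illustration:)

# target host, kernel side: move the veth end to its own subnet and route 192.168.2.x into vpp
ip addr add 192.168.3.201/24 dev vpp1host
ip route add 192.168.2.0/24 via 192.168.3.202

# vpp side of the veth gets an address in the new subnet (vpp routes between its connected subnets)
vpp# set int ip address host-vpp1out 192.168.3.202/24

# initiator: return route back to the host-side subnet
ip route add 192.168.3.0/24 via 192.168.2.20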

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using following patch ? (I suggest cherry picking it) https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gerrithub.io_-23_c_389566_&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=FE90i1g4fLqz2TZ_eM5V21BWuBXg2eB7L18qpVk7DSM&e=

SPDK iSCSI target can be started without specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC https://urldefense.proofpoint.com/v2/url?u=http-3A__www.spdk.io_doc_iscsi.html-23iscsi-5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-h5zZerTcOn1D9wfxM&e=.
Please see https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_spdk_spdk_blob_master_test_iscsi-5Ftgt_lvol_iscsi.conf&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=jSKH9IX5rn3DlmRDFR35I4V5I-bT1xxWSqSp1pIXygw&e= for example of minimal iSCSI config.
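
(Roughly speaking, such a minimal config reduces to just the global iSCSI section with no PortalGroup or TargetNode sections; the linked file is the authoritative example, the sketch below only illustrates the idea:)

[iSCSI]
  NodeBase "iqn.2016-06.io.spdk"
  # intentionally no [PortalGroup1] / [InitiatorGroup1] / [TargetNode1] sections here -
  # portals, initiator groups and target nodes are added later over RPC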

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbound the NIC from the kernel and started the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<https://urldefense.proofpoint.com/v2/url?u=http-3A__www.spdk.io_doc_iscsi.html-23iscsi-5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-h5zZerTcOn1D9wfxM&e=>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running so that one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any one of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf after unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.?

I would appreciate if anyone would help. Thank you.


Isaac
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.01.org_mailman_listinfo_spdk&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=2iHpVGzaloMHLL179exqyisY-BLZOoEFh5Y4Z7SArYs&e=

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-08-14 19:39 IOtsiabah
  0 siblings, 0 replies; 13+ messages in thread
From: IOtsiabah @ 2018-08-14 19:39 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 14666 bytes --]

Tomasz, we can increase the amount of hugepages used by vpp by increasing the dpdk parameters:

   socket-mem 1024
   num-mbufs 65536
 
However, there was no improvement in the fio performance test results. We are running our tests on CentOS 7. Are you testing vpp on Fedora instead? Can you share your test environment information with us?
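
(For reference, this is roughly where those parameters sit in our /etc/vpp/startup.conf dpdk section; the dev line is the same NIC as before and the values are simply the ones we tried:)

dpdk {
  dev 0000:82:00.0
  socket-mem 1024
  num-mbufs 65536
}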

Isaac/Edward

-----Original Message-----
From: Otsiabah, Isaac 
Sent: Tuesday, August 14, 2018 10:15 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac Otsiabah'; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul <PVonStamwitz(a)us.fujitsu.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>; Otsiabah, Isaac <IOtsiabah(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, we obtained your vpp patch 417056 (git fetch https://review.gerrithub.io/spdk/spdk refs/changes/56/417056/16:test16). We are testing it and have a few questions.

1. Please, can you share with us your test results and your test environment setup or configuration?

2. From experiment, we see that vpp always uses 105 pages from the available hugepages in the system, regardless of the amount available. Is there a way to increase the number of hugepages used by vpp?

Isaac

From: Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple but making the network ip address accessible to the iscsi_tgt application and to vpp is not working. From my understanding, vpp is started first on the target host and then iscsi_tgt application is started after the network setup is done (please, correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10G NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10G NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application; the startup output below shows the NIC is in use by vpp.

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the above 192.168.2.x subnet, so I ran these commands on the target server to create a veth pair and connect it to a vpp host-interface, as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp host-interface connection is not correct.

Please, how does one create the vpp host-interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.X, turn on IP forwarding, and add a route to the routing table?

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using following patch ? (I suggest cherry picking it) https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gerrithub.io_-23_c_389566_&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=FE90i1g4fLqz2TZ_eM5V21BWuBXg2eB7L18qpVk7DSM&e=

SPDK iSCSI target can be started without specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC https://urldefense.proofpoint.com/v2/url?u=http-3A__www.spdk.io_doc_iscsi.html-23iscsi-5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-h5zZerTcOn1D9wfxM&e=.
Please see https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_spdk_spdk_blob_master_test_iscsi-5Ftgt_lvol_iscsi.conf&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=jSKH9IX5rn3DlmRDFR35I4V5I-bT1xxWSqSp1pIXygw&e= for example of minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbound the NIC from the kernel and started the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<https://urldefense.proofpoint.com/v2/url?u=http-3A__www.spdk.io_doc_iscsi.html-23iscsi-5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-h5zZerTcOn1D9wfxM&e=>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running so that one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any one of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf after unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.?

I would appreciate if anyone would help. Thank you.


Isaac
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.01.org_mailman_listinfo_spdk&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=2iHpVGzaloMHLL179exqyisY-BLZOoEFh5Y4Z7SArYs&e=

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-08-14 17:15 IOtsiabah
  0 siblings, 0 replies; 13+ messages in thread
From: IOtsiabah @ 2018-08-14 17:15 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 13771 bytes --]

Hi Tomasz, we obtained your vpp patch 417056 (git fetch https://review.gerrithub.io/spdk/spdk refs/changes/56/417056/16:test16). We are testing it and have a few questions.

1. Please, can you share with us your test results and your test environment setup or configuration?

2. From experiment, we see that vpp always uses 105 pages from the available hugepages in the system, regardless of the amount available. Is there a way to increase the number of hugepages used by vpp?

Isaac

From: Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple but making the network ip address accessible to the iscsi_tgt application and to vpp is not working. From my understanding, vpp is started first on the target host and then iscsi_tgt application is started after the network setup is done (please, correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10G NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10G NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application; the startup output below shows the NIC is in use by vpp.

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the above 192.168.2.x subnet, so I ran these commands on the target server to create a veth pair and connect it to a vpp host-interface, as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp host-interface connection is not correct.

Please, how does one create the vpp host-interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.X, turn on IP forwarding, and add a route to the routing table?

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using following patch ? (I suggest cherry picking it) https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gerrithub.io_-23_c_389566_&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=FE90i1g4fLqz2TZ_eM5V21BWuBXg2eB7L18qpVk7DSM&e=

SPDK iSCSI target can be started without specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC https://urldefense.proofpoint.com/v2/url?u=http-3A__www.spdk.io_doc_iscsi.html-23iscsi-5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-h5zZerTcOn1D9wfxM&e=.
Please see https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_spdk_spdk_blob_master_test_iscsi-5Ftgt_lvol_iscsi.conf&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=jSKH9IX5rn3DlmRDFR35I4V5I-bT1xxWSqSp1pIXygw&e= for example of minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbound the NIC from the kernel and started the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<https://urldefense.proofpoint.com/v2/url?u=http-3A__www.spdk.io_doc_iscsi.html-23iscsi-5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-h5zZerTcOn1D9wfxM&e=>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running so that one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any one of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf after unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.?

I would appreciate if anyone would help. Thank you.


Isaac
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.01.org_mailman_listinfo_spdk&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=2iHpVGzaloMHLL179exqyisY-BLZOoEFh5Y4Z7SArYs&e=

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-08-14 17:05 IOtsiabah
  0 siblings, 0 replies; 13+ messages in thread
From: IOtsiabah @ 2018-08-14 17:05 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 12815 bytes --]

Hi Tomasz, we obtained your vpp patch 417056 (git fetch https://review.gerrithub.io/spdk/spdk refs/changes/56/417056/16:test16). We are testing it and have a few questions.

1. Please, can you share with us your test results and your test environment setup or configuration?

2. From experiment, we see that vpp always uses 105 pages from the available hugepages in the system, regardless of the amount available. Is there a way to increase the number of hugepages used by vpp?

Isaac
From: Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple but making the network ip address accessible to the iscsi_tgt application and to vpp is not working. From my understanding, vpp is started first on the target host and then iscsi_tgt application is started after the network setup is done (please, correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10G NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10G NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application; the startup output below shows the NIC is in use by vpp.

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the above 192.168.2.x subnet, so I ran these commands on the target server to create a veth pair and connect it to a vpp host-interface, as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp host-interface connection is not correct.

Please, how does one create the vpp host-interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.X, turn on IP forwarding, and add a route to the routing table?

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using following patch ? (I suggest cherry picking it)
https://review.gerrithub.io/#/c/389566/

SPDK iSCSI target can be started without specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for example of minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbound the NIC from the kernel and started the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running so that one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any one of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf after unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.?

I would appreciate if anyone would help. Thank you.


Isaac

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-04-19 16:42 Isaac Otsiabah
  0 siblings, 0 replies; 13+ messages in thread
From: Isaac Otsiabah @ 2018-04-19 16:42 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 15731 bytes --]

Tomasz, thank you for your response. I work for Paul Von-Stamwitz in Sunnyvale. Please, can you share with us

a)      what is your hardware topology or setup like?

b)      Which OS (and version) did you use?

Thank you very much, my friend.

Isaac
From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Tuesday, April 17, 2018 7:49 PM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Thank you for all the detailed descriptions, it really helps to understand the steps you took.

I see that you are working with two hosts and using network cards (TenGigabitEthernet82). Actually, all you did in "STEP1" is enough for iscsi_tgt to listen on 192.168.2.20; "STEP2" is not required. The only thing left to do on the target is to configure a portal/initiator_group/target_node, as described here<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>.
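
(For illustration, a rough sketch of that RPC sequence, assuming a Malloc bdev as the backing device and the portal address from STEP1; the names, tags and sizes are placeholders and the linked documentation is authoritative:)

./scripts/rpc.py add_portal_group 1 192.168.2.20:3260          # portal group 1 on the VPP-owned address
./scripts/rpc.py add_initiator_group 2 ANY 192.168.2.0/24      # allow initiators from the 192.168.2.0/24 subnet
./scripts/rpc.py construct_malloc_bdev -b Malloc0 64 512       # 64 MB malloc bdev with 512-byte blocks
./scripts/rpc.py construct_target_node disk1 "Data Disk1" "Malloc0:0" "1:2" 64 -d   # bdev as LUN 0, portal group 1 / initiator group 2, queue depth 64, CHAP disabled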

"Example: Tap interfaces on a single host" is describing situation when someone would like to try out VPP without using another host and "real" network cards. Same goes for veth interfaces used in scripts for per-patch tests - they are done on single host.
Thinking back, there should be second example with exact setup that you have - two hosts using network cards. I will look into it.

Thanks for all the feedback !

PS. The patch with the VPP implementation is merged on master as of today, so there is no need to cherry-pick anymore.

Regards,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 18, 2018 1:29 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, after executing the commands from the paragraph "Example: Tap interfaces on a single host" in the linked document (http://www.spdk.io/doc/iscsi.html#vpp), when I ping vpp from the target server I get a response.


[root(a)spdk2 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# tap connect tap0
tapcli-0
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2        down      drops                          8
vpp# set interface state tapcli-0 up
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2         up       drops                          8
vpp# set interface ip address tapcli-0 192.168.2.20/24
vpp# show int addr
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
 192.168.2.20/24

ip addr add 192.168.2.202/24 dev tap0
ip link set tap0 up

/* pinging vpp from target Server */
[root(a)spdk2 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.116 ms

My question is: what about the 10G interface (TenGigabitEthernet82/0/0), which the "show interface" output above still reports as down? The document does not say anything about it. Shouldn't I set an IP address for it and bring it up?



Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple but making the network ip address accessible to the iscsi_tgt application and to vpp is not working. From my understanding, vpp is started first on the target host and then iscsi_tgt application is started after the network setup is done (please, correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10G NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10G NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application; the startup output below shows the NIC is in use by vpp.

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the 192.168.2.x subnet above, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface, as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss
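/* (sketch, not from the original run) with "trace add af-packet-input 10" enabled above, the captured packets could presumably be inspected with: */
vpp# show trace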

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp interface connection is not correct.

Please, how does one create the vpp host interface and connect it so that host applications (i.e., iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a route to the routing table?
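(A sketch of the routed variant asked about above, using placeholder 192.168.3.x addresses; note that the reply later in this thread says STEP2 is not needed at all. The idea would be to keep the veth link on its own subnet and let VPP route between it and the 10G network:)

ip addr add 192.168.3.1/24 dev vpp1host
ip route add 192.168.2.0/24 via 192.168.3.2

vpp# set int ip address host-vpp1out 192.168.3.2/24

/* the initiator would also need a return route, e.g. ip route add 192.168.3.0/24 via 192.168.2.20 */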

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/
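(A typical way to pull that GerritHub change into a local SPDK tree; the trailing patch-set number "/1" is a placeholder:)

git fetch https://review.gerrithub.io/spdk/spdk refs/changes/66/389566/1
git cherry-pick FETCH_HEAD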

The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC (http://www.spdk.io/doc/iscsi.html#iscsi_rpc).
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having the VPP app started before the SPDK app.
Interfaces in VPP (like tap or veth) can be created and configured at runtime, and are available for use in SPDK as well.
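(Putting the four steps above together, a rough end-to-end sketch; the device address, core mask and portal address are taken from elsewhere in this thread, and this is an illustration rather than a verified recipe:)

# 1. unbind the NIC from the kernel
./usertools/dpdk-devbind.py -u 0000:82:00.0
# 2. start VPP and configure the interface
vpp -c /etc/vpp/startup.conf
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vppctl set interface state TenGigabitEthernet82/0/0 up
# 3. start the SPDK iSCSI target with no portal groups or target nodes in iscsi.conf
./app/iscsi_tgt/iscsi_tgt -m 0x101 &
# 4. configure the target via RPC once it is up
./scripts/rpc.py add_portal_group 1 192.168.2.20:3260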

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and start the VPP application.

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" suggest the iscsi_tgt server must already be running before one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any one of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf? After unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.?

I would appreciate it if anyone could help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 44849 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-04-18  2:49 Zawadzki, Tomasz
  0 siblings, 0 replies; 13+ messages in thread
From: Zawadzki, Tomasz @ 2018-04-18  2:49 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 15143 bytes --]

Hello Isaac,

Thank you for all the detailed descriptions; they really help in understanding the steps you took.

I see that you are working with two hosts and using network cards (TenGigabitEthernet82). Actually, everything you did in "STEP1" is enough for iscsi_tgt to listen on 192.168.2.20; "STEP2" is not required. The only thing left to do on the target is to configure the portal/initiator_group/target_node, as described here<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>.

"Example: Tap interfaces on a single host" describes the situation where someone would like to try out VPP without using another host and "real" network cards. The same goes for the veth interfaces used in the scripts for per-patch tests - they run on a single host.
Thinking back, there should be a second example with the exact setup that you have - two hosts using network cards. I will look into it.
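(For completeness, a hedged sketch of that remaining configuration via rpc.py; the bdev name Malloc0 and the exact construct_target_node argument order are assumptions based on the SPDK documentation of that era and may differ between versions:)

./scripts/rpc.py add_portal_group 1 192.168.2.20:3260
./scripts/rpc.py add_initiator_group 2 ANY 192.168.2.0/24
./scripts/rpc.py construct_target_node disk1 "Data Disk1" "Malloc0:0" "1:2" 64 -d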

Thanks for all the feedback!

PS. The patch with the VPP implementation is merged on master as of today; no need to cherry-pick anymore.

Regards,
Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 18, 2018 1:29 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org>; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, after executing the commands from the paragraph "Example: Tap interfaces on a single host" in this document (http://www.spdk.io/doc/iscsi.html#vpp), when I ping vpp from the target server I get a response.


[root(a)spdk2 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# tap connect tap0
tapcli-0
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2        down      drops                          8
vpp# set interface state tapcli-0 up
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2         up       drops                          8
vpp# set interface ip address tapcli-0 192.168.2.20/24
vpp# show int addr
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
 192.168.2.20/24

ip addr add 192.168.2.202/24 dev tap0
ip link set tap0 up

/* pinging vpp from target Server */
[root(a)spdk2 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.116 ms

My question is, what about the 10G interface shown above as "TenGigabitEthernet82/0/0          1        down"? The document does not say anything about it. Shouldn't I set an IP address for it and bring it up?



Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to both the iscsi_tgt application and vpp is not working. From my understanding, vpp is started first on the target host, and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10Gb NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10Gb NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application, and the startup output shows the NIC is in use by vpp:

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the 192.168.2.x subnet above, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface, as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp interface connection is not correct.

Please, how does one create the vpp host interface and connect it so that host applications (i.e., iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a route to the routing table?

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC (http://www.spdk.io/doc/iscsi.html#iscsi_rpc).
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having the VPP app started before the SPDK app.
Interfaces in VPP (like tap or veth) can be created and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and start the VPP application.

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" suggest the iscsi_tgt server must already be running before one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any one of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf? After unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.?

I would appreciate it if anyone could help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 42664 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-04-17 23:29 Isaac Otsiabah
  0 siblings, 0 replies; 13+ messages in thread
From: Isaac Otsiabah @ 2018-04-17 23:29 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 13763 bytes --]

Hi Tomasz, after executing the commands from the paragraph "Example: Tap interfaces on a single host" in this document (http://www.spdk.io/doc/iscsi.html#vpp), when I ping vpp from the target server I get a response.

[root(a)spdk2 ~]# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
_/ _// // / / / _ \   | |/ / ___/ ___/
/_/ /____(_)_/\___/   |___/_/  /_/

vpp# tap connect tap0
tapcli-0
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2        down      drops                          8
vpp# set interface state tapcli-0 up
vpp# show interface
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
tapcli-0                          2         up       drops                          8
vpp# set interface ip address tapcli-0 192.168.2.20/24
vpp# show int addr
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
 192.168.2.20/24

ip addr add 192.168.2.202/24 dev tap0
ip link set tap0 up

/* pinging vpp from target Server */
[root(a)spdk2 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.116 ms

My question is, what about the 10G interface shown above as "TenGigabitEthernet82/0/0          1        down"? The document does not say anything about it. Shouldn't I set an IP address for it and bring it up?



Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to both the iscsi_tgt application and vpp is not working. From my understanding, vpp is started first on the target host, and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10Gb NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10Gb NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application, and the startup output shows the NIC is in use by vpp:

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the 192.168.2.x subnet above, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface, as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp interface connection is not correct.

Please, how does one create the vpp host interface and connect it so that host applications (i.e., iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a route to the routing table?

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC (http://www.spdk.io/doc/iscsi.html#iscsi_rpc).
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having the VPP app started before the SPDK app.
Interfaces in VPP (like tap or veth) can be created and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and start the VPP application.

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" suggest the iscsi_tgt server must already be running before one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any one of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf? After unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.?

I would appreciate it if anyone could help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 38283 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-04-17 18:46 Isaac Otsiabah
  0 siblings, 0 replies; 13+ messages in thread
From: Isaac Otsiabah @ 2018-04-17 18:46 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 11519 bytes --]

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to both the iscsi_tgt application and vpp is not working. From my understanding, vpp is started first on the target host, and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).


    -------  192.168.2.10
    |      |  initiator
    -------
        |
        |
        |
-------------------------------------------- 192.168.2.0
                                    |
                                    |
                                    |  192.168.2.20
                                --------------   vpp, vppctl
                                |                |  iscsi_tgt
                                --------------

Both systems have a 10Gb NIC.

(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10Gb NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application, and the startup output shows the NIC is in use by vpp:

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran

STEP1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On Initiator):
/* ping vpp interface from initiator*/
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP2:
However, when I start the iscsi_tgt server, it does not have access to the 192.168.2.x subnet above, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface, as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10


/* From host, ping vpp */

[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp interface connection is not correct.

Please, how does one create the vpp host interface and connect it so that host applications (i.e., iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a route to the routing table?

Isaac

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC (http://www.spdk.io/doc/iscsi.html#iscsi_rpc).
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having the VPP app started before the SPDK app.
Interfaces in VPP (like tap or veth) can be created and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and start the VPP application.

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" suggest the iscsi_tgt server must already be running before one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any one of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf? After unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.?

I would appreciate it if anyone could help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 31237 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-04-13 22:08 Isaac Otsiabah
  0 siblings, 0 replies; 13+ messages in thread
From: Isaac Otsiabah @ 2018-04-13 22:08 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 8874 bytes --]

Tomasz, the spdk build I tested with this morning was built before you pointed me to the patch in your email yesterday. I am getting the patch you pointed me to.

Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Friday, April 13, 2018 1:29 PM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org>; 'tomasz.zawadzki(a)intel.com' <tomasz.zawadzki(a)intel.com>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?


Tomasz, I think the problem is the linking against the vpp library, because if I build iscsi_tgt without linking to vpp, the RPC commands work.
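(A hedged sketch of the two builds being compared, assuming the configure option added by the VPP patch is named --with-vpp and that the path below is where the VPP libraries were installed - both are placeholders:)

# build linked against VPP
./configure --with-vpp=/path/to/vpp/build-root/install-vpp-native/vpp
make

# build without VPP (the case where the RPC commands work)
./configure
make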

Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Friday, April 13, 2018 11:00 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>; 'tomasz.zawadzki(a)intel.com' <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?


Hi Tomasz, I configured vpp and I thought it was working because I can ping the initiator machine (192.168.2.10) on my lab's private network.

(On target Server):
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):
vpp#


vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1         up       rx packets                    58
                                                     rx bytes                    5342
                                                     tx packets                    58
                                                     tx bytes                    5324
                                                     drops                         71      (drops were expected because I did a ping to itself on 192.168.2.20)
                                                     ip4                           49
local0                            0        down

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.1229 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0412 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0733 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0361 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0332 ms

Statistics: 5 sent, 5 received, 0% packet loss


However, something is missing, because I started the iscsi_tgt server (below), executed the add_portal_group RPC command to add a portal group, and got the bind() error below. iscsi.conf is attached (the portal group and target sections are commented out). The iscsi_tgt binary was linked against the same vpp library that I built.

[root(a)spdk2 spdk]# ./scripts/rpc.py add_portal_group 1 192.168.2.20:3260
Got JSON-RPC error response
request:
{
  "params": {
    "portals": [
      {
        "host": "192.168.2.20",
        "port": "3260"
      }
    ],
    "tag": 1
  },
  "jsonrpc": "2.0",
  "method": "add_portal_group",
  "id": 1
}
response:
{
  "message": "Invalid parameters",
  "code": -32602
}

[root(a)spdk2 spdk]# ./app/iscsi_tgt/iscsi_tgt -m 0x0101
Starting SPDK v18.04-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0x0101 --file-prefix=spdk_pid8974 ]
EAL: Detected 16 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.spdk_pid8974_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 443:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 645:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x3
reactor.c: 429:_spdk_reactor_run: *NOTICE*: Reactor started on core 8 on socket 1
reactor.c: 429:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 318:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
init_grp.c: 107:spdk_iscsi_init_grp_add_initiator: *WARNING*: Please use "ANY" instead of "ALL"
init_grp.c: 108:spdk_iscsi_init_grp_add_initiator: *WARNING*: Converting "ALL" to "ANY" automatically

posix.c: 221:spdk_posix_sock_create: *ERROR*: bind() failed, errno = 99
posix.c: 231:spdk_posix_sock_create: *ERROR*: IP address 192.168.2.20 not available. Verify IP address in config file and make sure setup script is run before starting spdk app.
portal_grp.c: 190:spdk_iscsi_portal_open: *ERROR*: listen error 192.168.2.20.3260
iscsi_rpc.c: 969:spdk_rpc_add_portal_group: *ERROR*: portal_grp_open failed
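(For reference, errno 99 is EADDRNOTAVAIL: the POSIX socket code cannot find 192.168.2.20 on any kernel interface, which is expected when the address lives inside VPP rather than the kernel. A quick sanity check on the target, sketch only:)

ip addr show | grep 192.168.2.20 || echo "192.168.2.20 is not owned by the kernel"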







From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using following patch ? (I suggest cherry picking it)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind to, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.
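
For reference, a minimal legacy-style config is roughly just the sketch below (not necessarily identical to the linked file, and the IQN base is only the usual example value); with no PortalGroup or TargetNode sections, everything network-related is added later over RPC:

[iSCSI]
  NodeBase "iqn.2016-06.io.spdk"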

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC; at this time it should be possible to use the interface configured in VPP (an example command sequence is sketched below)

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.
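
For example, with illustrative names (substitute the PCI address of your NIC and the interface name/address that "vppctl show int" reports on your system), the whole flow is roughly:

./usertools/dpdk-devbind.py -u 0000:07:00.0                               # 1. unbind from kernel
vpp unix {cli-listen /run/vpp/cli.sock}                                   # 2. start VPP
vppctl set interface state TenGigabitEthernet82/0/0 up
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
./app/iscsi_tgt/iscsi_tgt -m 0x101                                        # 3. start SPDK
./scripts/rpc.py add_portal_group 1 192.168.2.20:3260                     # 4. configure via RPC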

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on a CentOS 7.4 (x86_64) system, built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and then start the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions under "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running before the RPC commands can be executed. But how do I get the iscsi_tgt server running without an interface to bind to during its initialization?

Please, can any of you help explain how to run the SPDK iscsi_tgt application with VPP (for instance: what should change in iscsi.conf; after unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to; what address should be assigned to the Portal in iscsi.conf; etc.)?

I would appreciate if anyone would help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 27299 bytes --]


* Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
@ 2018-04-13 20:29 Isaac Otsiabah
  0 siblings, 0 replies; 13+ messages in thread
From: Isaac Otsiabah @ 2018-04-13 20:29 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 8242 bytes --]


Tomasz, I think the problem is the linking against the VPP library, because if I build iscsi_tgt without linking against VPP, the RPC commands work.
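
One generic way to double-check which build is actually running (an ordinary binutils check, nothing SPDK-specific) is to look at the binary's shared-library dependencies:

ldd ./app/iscsi_tgt/iscsi_tgt | grep -i vpp

If VPP was linked statically this prints nothing, and "nm" or "strings" on the binary can be used to look for VPP symbols instead.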

Isaac
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Isaac Otsiabah
Sent: Friday, April 13, 2018 11:00 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org>; 'tomasz.zawadzki(a)intel.com' <tomasz.zawadzki(a)intel.com>
Cc: Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?


Hi Tomasz, I configured vpp and I thought it was working because I can  ping the initiator machine (192.168.2.10) on my lab private network.

(On target Server):
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):
vpp#


vpp# show int
              Name               Idx       State          Counter                                 Count
TenGigabitEthernet82/0/0          1         up       rx packets                    58
                                                                                       rx bytes                    5342
                                                                                       tx packets                    58
                                                                                       tx bytes                    5324
                                                                                       drops                         71      (drops were expected because I did a ping to itself on 192.168.2.20)
                                                                                       ip4                           49
local0                                                    0        down

vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.1229 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0412 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0733 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0361 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0332 ms

Statistics: 5 sent, 5 received, 0% packet loss


However, something is missing, because I started the iscsi_tgt server (below), executed the add_portal_group RPC command to add a portal group, and got the bind() error below. iscsi.conf is attached (the portal group and target sections are commented out). The iscsi_tgt binary was linked against the same VPP library that I built.

[root(a)spdk2 spdk]# ./scripts/rpc.py add_portal_group 1 192.168.2.20:3260
Got JSON-RPC error response
request:
{
  "params": {
    "portals": [
      {
        "host": "192.168.2.20",
        "port": "3260"
      }
    ],
    "tag": 1
  },
  "jsonrpc": "2.0",
  "method": "add_portal_group",
  "id": 1
}
response:
{
  "message": "Invalid parameters",
  "code": -32602
}

[root(a)spdk2 spdk]# ./app/iscsi_tgt/iscsi_tgt -m 0x0101
Starting SPDK v18.04-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0x0101 --file-prefix=spdk_pid8974 ]
EAL: Detected 16 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.spdk_pid8974_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 443:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 645:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x3
reactor.c: 429:_spdk_reactor_run: *NOTICE*: Reactor started on core 8 on socket 1
reactor.c: 429:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 318:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:953 spdk_nvme
init_grp.c: 107:spdk_iscsi_init_grp_add_initiator: *WARNING*: Please use "ANY" instead of "ALL"
init_grp.c: 108:spdk_iscsi_init_grp_add_initiator: *WARNING*: Converting "ALL" to "ANY" automatically

posix.c: 221:spdk_posix_sock_create: *ERROR*: bind() failed, errno = 99
posix.c: 231:spdk_posix_sock_create: *ERROR*: IP address 192.168.2.20 not available. Verify IP address in config file and make sure setup script is run before starting spdk app.
portal_grp.c: 190:spdk_iscsi_portal_open: *ERROR*: listen error 192.168.2.20.3260
iscsi_rpc.c: 969:spdk_rpc_add_portal_group: *ERROR*: portal_grp_open failed

From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com<mailto:IOtsiabah(a)us.fujitsu.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind to, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

Suggested flow of starting up applications is:

1.       Unbind interfaces from kernel

2.       Start VPP and configure the interface via vppctl

3.       Start SPDK

4.       Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com<mailto:tomasz.zawadzki(a)intel.com>>
Cc: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>; Verkamp, Daniel <daniel.verkamp(a)intel.com<mailto:daniel.verkamp(a)intel.com>>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com<mailto:PVonStamwitz(a)us.fujitsu.com>>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on a CentOS 7.4 (x86_64) system, built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, I first unbind the NIC from the kernel and then start the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0

vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."

is not clear, because the instructions under "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running before the RPC commands can be executed. But how do I get the iscsi_tgt server running without an interface to bind to during its initialization?

Please, can any of you help explain how to run the SPDK iscsi_tgt application with VPP (for instance: what should change in iscsi.conf; after unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to; what address should be assigned to the Portal in iscsi.conf; etc.)?

I would appreciate if anyone would help. Thank you.


Isaac

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 25957 bytes --]


end of thread, other threads:[~2018-09-05 18:10 UTC | newest]

Thread overview: 13+ messages
2018-04-13 18:00 [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP? Isaac Otsiabah
2018-04-13 20:29 Isaac Otsiabah
2018-04-13 22:08 Isaac Otsiabah
2018-04-17 18:46 Isaac Otsiabah
2018-04-17 23:29 Isaac Otsiabah
2018-04-18  2:49 Zawadzki, Tomasz
2018-04-19 16:42 Isaac Otsiabah
2018-08-14 17:05 IOtsiabah
2018-08-14 17:15 IOtsiabah
2018-08-14 19:39 IOtsiabah
2018-08-30 22:51 IOtsiabah
2018-08-31  7:16 Zawadzki, Tomasz
2018-09-05 18:10 Edward.Yang
