From: Edward.Yang@us.fujitsu.com
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Date: Wed, 05 Sep 2018 18:10:33 +0000
Message-ID: <0c66559bcf3e4b3e8400d666dd21428c@g05usexrtxa02.g05.fujitsu.local>
In-Reply-To: 3FF20EF7F07495429158B858FACC0D7F3E4545EC@IRSMSX103.ger.corp.intel.com
To: spdk@lists.01.org

Hi Tomasz,

Thank you very much for the response. Isaac is on vacation at the moment.

You said:
>One host is using SPDK iSCSI target with VPP (or posix for comparison) and the other SPDK iSCSI initiator.

Is there an SPDK iSCSI initiator? (In our test, we set up the iSCSI initiator without SPDK installed on the initiator machine.)

You said:
>Since this is evaluation of iSCSI target Session API, please keep in mind that null bdevs were used to eliminate other types of bdevs from affecting the results.

About the null bdevs that you mentioned, did you mean Malloc bdev devices? (A sketch of how we would instead create null bdevs is included right after our target setup below.)

The following is how we set up the iSCSI target with VPP. Please see if our procedure is fine.

On iSCSI target (CentOS 7.4):
============

VPP was configured through /etc/vpp/startup.conf as:

cpu {
  ...
  main-core 1
  ...
  corelist-workers 2-5
  ...
  workers 4
}

# One 10G NIC with its PCI address and parameters:
dev 0000:82:00.0 {
  num-rx-queues 4
  num-rx-desc 1024
}

Then, we start VPP and set up the interface as below:

ifdown enp130s0f0
/root/spdk_vpp_pr/spdk/dpdk/usertools/dpdk-devbind.py --bind=uio_pci_generic 82:00.0
systemctl start vpp
vppctl set interface state TenGigabitEthernet82/0/0 up
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.10/24

And we start the iSCSI target and construct 4 Malloc block devices, each 256MB with a 512B sector size, as:

/root/spdk_vpp_pr/spdk/app/iscsi_tgt/iscsi_tgt -m 0x01 -c /usr/local/etc/spdk/iscsi.conf
python /root/spdk_vpp_pr/spdk/scripts/rpc.py add_portal_group 1 192.168.2.10:3260
python /root/spdk_vpp_pr/spdk/scripts/rpc.py add_initiator_group 2 ANY 192.168.2.50/24
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc0 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc1 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc2 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc3 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_target_node disk1 "Data Disk1" "Malloc0:0 Malloc1:1 Malloc2:2 Malloc3:3" 1:2 64 -d
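Related to the null bdev question above: if you meant SPDK's null bdev type rather than Malloc, we assume we would recreate the same target node with the construct_null_bdev RPC instead, roughly as below. This is only a sketch based on our reading of the rpc.py help text; we have not run it, and the names and sizes simply mirror our Malloc setup:

# Assumed alternative: four 256MB null bdevs with 512B blocks instead of Malloc bdevs
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_null_bdev Null0 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_null_bdev Null1 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_null_bdev Null2 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_null_bdev Null3 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_target_node disk1 "Data Disk1" "Null0:0 Null1:1 Null2:2 Null3:3" 1:2 64 -d

If that is the bdev type you used, we can repeat our measurement with null bdevs so the comparison matches yours.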
On iSCSI initiator (CentOS 7.4):

iscsiadm -m discovery -t sendtargets -p 192.168.2.10
iscsiadm -m node --login
(after login, /dev/sdd - /dev/sdg are added)

Then, we run fio through the following job file, fio_vpp_randread.txt:

[global]
ioengine=libaio
direct=1
ramp_time=15
runtime=60
iodepth=32
randrepeat=0
bs=4K
group_reporting
time_based

[job1]
rw=randread
filename=/dev/sdd
name=raw-random-read

[job2]
rw=randread
filename=/dev/sde
name=raw-random-read

[job3]
rw=randread
filename=/dev/sdf
name=raw-random-read

[job4]
rw=randread
filename=/dev/sdg
name=raw-random-read

The fio job report:

[root@gluster3 ~]# fio fio_vpp_randread.txt
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=338MiB/s,w=0KiB/s][r=86.6k,w=0 IOPS][eta 00m:00s]
raw-random-read: (groupid=0, jobs=4): err= 0: pid=5355: Wed Sep 5 12:20:11 2018
  read: IOPS=89.2k, BW=348MiB/s (365MB/s)(20.4GiB/60002msec)
    slat (nsec): min=1543, max=1054.4k, avg=9189.80, stdev=11823.44
    clat (usec): min=304, max=8993, avg=1422.99, stdev=526.70
     lat (usec): min=318, max=9000, avg=1432.50, stdev=525.91
    clat percentiles (usec):
     |  1.00th=[  652],  5.00th=[  799], 10.00th=[  898], 20.00th=[ 1004],
     | 30.00th=[ 1106], 40.00th=[ 1205], 50.00th=[ 1303], 60.00th=[ 1434],
     | 70.00th=[ 1582], 80.00th=[ 1778], 90.00th=[ 2114], 95.00th=[ 2409],
     | 99.00th=[ 3163], 99.50th=[ 3556], 99.90th=[ 4490], 99.95th=[ 4883],
     | 99.99th=[ 5735]
   bw (  KiB/s): min=68987, max=134923, per=25.08%, avg=89483.28, stdev=14270.72, samples=480
   iops        : min=17246, max=33730, avg=22370.52, stdev=3567.67, samples=480
  lat (usec)   : 500=0.02%, 750=3.37%, 1000=15.86%
  lat (msec)   : 2=68.19%, 4=12.33%, 10=0.22%
  cpu          : usr=6.75%, sys=20.85%, ctx=2226569, majf=0, minf=749
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=127.2%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=5352130,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=348MiB/s (365MB/s), 348MiB/s-348MiB/s (365MB/s-365MB/s), io=20.4GiB (21.9GB), run=60002-60002msec

Disk stats (read/write):
  sdd: ios=1732347/0, merge=19057/0, ticks=2248152/0, in_queue=2247789, util=99.91%
  sde: ios=1558604/0, merge=18701/0, ticks=2314201/0, in_queue=2314047, util=99.92%
  sdf: ios=1719023/0, merge=19309/0, ticks=2257093/0, in_queue=2256728, util=99.94%
  sdg: ios=1723574/0, merge=18756/0, ticks=2253409/0, in_queue=2253174, util=99.95%

Thanks for any advice.

Regards,
Edward

>-----Original Message-----
>From: Zawadzki, Tomasz [mailto:tomasz.zawadzki@intel.com]
>Sent: Friday, August 31, 2018 12:16 AM
>To: Otsiabah, Isaac; Storage Performance Development Kit; 'Isaac Otsiabah'
>Cc: Yang, Edward; Von-Stamwitz, Paul
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hello Isaac,
>
>Sorry for the delayed response.
>
>Please keep in mind that the patch for the switch from VCL to the Session API is in
>active development, with changes being applied regularly.
>https://review.gerrithub.io/#/c/spdk/spdk/+/417056/
>We are still trying to work out best practices and a recommended setup for SPDK
>iSCSI target running along with VPP.
>
>Our test environment at this time consists of Fedora 26 machines, using either
>one or two 40GB/s interfaces per host. One host is using the SPDK iSCSI target with
>VPP (or posix for comparison) and the other the SPDK iSCSI initiator.
>After the switch to the Session API we were able to saturate a single 40GB/s interface
>with much lower core usage in VPP compared to VCL, as well as reduce the number of
>SPDK iSCSI target cores used in such a setup. Both the Session API and posix
>implementations were able to saturate 40GB/s while having similar CPU efficiency.
>We are working on evaluating higher throughputs (80GB/s and more), as well as
>looking at optimizations to the usage of the Session API within SPDK.
>
>We have not seen much change from modifying most VPP config parameters
>from their defaults, at this time, for our setup. We keep num-mbufs at its default and
>socket-mem at 1024, and mostly change parameters regarding the number of worker
>cores and num-rx-queues.
>For the iSCSI parameters, both for posix and the Session API, at certain throughputs
>increasing the number of targets/LUNs within a portal group was needed. We were
>doing our comparisons at around 32-128 targets/LUNs. Since this is an evaluation of
>the iSCSI target Session API, please keep in mind that null bdevs were used to
>eliminate other types of bdevs from affecting the results.
>Besides that, the key point for higher throughputs is having the iSCSI initiator
>actually be able to generate enough traffic for the iSCSI target.
>
>May I ask what kind of setup you are using for the comparisons? Are you
>targeting 10GB/s interfaces as noted in previous emails?
>
>Tomek
>
>-----Original Message-----
>From: IOtsiabah@us.fujitsu.com [mailto:IOtsiabah@us.fujitsu.com]
>Sent: Friday, August 31, 2018 12:51 AM
>To: Storage Performance Development Kit; 'Isaac Otsiabah'; Zawadzki, Tomasz
>Cc: Verkamp, Daniel; Edward.Yang@us.fujitsu.com; PVonStamwitz@us.fujitsu.com
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hi Tomasz, you probably went on vacation and are back now. Previously, I
>sent you the two emails below. Please, can you respond to them for us? Thank
>you.
>
>Isaac
>
>-----Original Message-----
>From: Otsiabah, Isaac
>Sent: Tuesday, August 14, 2018 12:40 PM
>To: Storage Performance Development Kit; 'Isaac Otsiabah'; Zawadzki, Tomasz
>Cc: Verkamp, Daniel; Yang, Edward; Von-Stamwitz, Paul
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Tomasz, we can increase the amount of hugepages used by vpp by increasing
>the dpdk parameters
>
>    socket-mem 1024
>    num-mbufs 65536
>
>however, there was no improvement in the fio performance test results. We are
>running our test on CentOS 7. Are you testing vpp on Fedora instead? Can you
>share your test environment information with us?
>
>Isaac/Edward
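(For reference on our side, we assume the two DPDK parameters Isaac mentions above belong in the dpdk { } section of /etc/vpp/startup.conf, next to the NIC entry we showed earlier. The stanza below is only how we would expect it to look, with the values Isaac quoted rather than tuned ones:)

dpdk {
  socket-mem 1024
  num-mbufs 65536
  dev 0000:82:00.0 {
    num-rx-queues 4
    num-rx-desc 1024
  }
}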
>-----Original Message-----
>From: Otsiabah, Isaac
>Sent: Tuesday, August 14, 2018 10:15 AM
>To: Storage Performance Development Kit; 'Isaac Otsiabah'; Zawadzki, Tomasz
>Cc: Verkamp, Daniel; Yang, Edward; Von-Stamwitz, Paul; Otsiabah, Isaac
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hi Tomasz, we obtained your vpp patch 417056 (git fetch
>https://review.gerrithub.io/spdk/spdk refs/changes/56/417056/16:test16).
>We are testing it and have a few questions.
>
>1. Please, can you share with us your test results and your test environment
>setup or configuration?
>
>2. From experiment, we see that vpp always uses 105 pages from the
>available hugepages in the system regardless of the amount available. Is there
>a way to increase the amount of hugepages for vpp?
>
>Isaac
>
>From: Isaac Otsiabah
>Sent: Tuesday, April 17, 2018 11:46 AM
>To: 'Zawadzki, Tomasz'; 'spdk@lists.01.org'
>Cc: Harris, James R; Verkamp, Daniel; Paul Von-Stamwitz
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hi Tomasz, I got the SPDK patch. My network topology is simple, but making
>the network IP address accessible to the iscsi_tgt application and to vpp is not
>working. From my understanding, vpp is started first on the target host and
>then the iscsi_tgt application is started after the network setup is done (please
>correct me if this is not the case).
>
>
>       -------   192.168.2.10
>      |       |  initiator
>       -------
>          |
>          |
>          |
>  -------------------------------------------- 192.168.2.0
>          |
>          |
>          | 192.168.2.20
>   --------------   vpp, vppctl
>  |              |  iscsi_tgt
>   --------------
>
>Both systems have a 10GB NIC.
>
>(On target server):
>I set up the vpp environment variables through the sysctl command.
>I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the
>first 10GB NIC (device address 0000:82:00.0).
>That worked, so I started the vpp application, and from the startup output the
>NIC is in use by vpp:
>
>[root@spdk2 ~]# vpp -c /etc/vpp/startup.conf
>vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
>load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
>load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
>load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
>load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
>load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
>load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
>load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
>load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
>load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
>load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
>load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
>load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
>load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
>load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
>load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
>vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
>vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
>EAL: No free hugepages reported in hugepages-1048576kB
>EAL: VFIO support initialized
>DPDK physical memory layout:
>Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
>Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
>Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
>Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
>Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nrank:0
>
>STEP1:
>Then, from the vppctl command prompt, I set up the IP address for the 10G interface
>and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown
>below.
>
>vpp# show int
>              Name               Idx       State          Counter          Count
>TenGigabitEthernet82/0/0          1        down
>local0                            0        down
>vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
>vpp# set interface state TenGigabitEthernet82/0/0 up
>vpp# show int
>              Name               Idx       State          Counter          Count
>TenGigabitEthernet82/0/0          1         up
>local0                            0        down
>vpp# show int address
>TenGigabitEthernet82/0/0 (up):
>  192.168.2.20/24
>local0 (dn):
>
>/* ping initiator from vpp */
>
>vpp# ping 192.168.2.10
>64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
>64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
>64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
>64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
>64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms
>
>(On initiator):
>/* ping vpp interface from initiator */
>[root@spdk1 ~]# ping -c 2 192.168.2.20
>PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
>64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
>64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms
>
>STEP2:
>However, when I start the iscsi_tgt server, it does not have access to the above
>192.168.2.x subnet, so I ran these commands on the target server to create a
>veth pair and then connected it to a vpp host-interface as follows:
>
>ip link add name vpp1out type veth peer name vpp1host
>ip link set dev vpp1out up
>ip link set dev vpp1host up
>ip addr add 192.168.2.201/24 dev vpp1host
>
>vpp# create host-interface name vpp1out
>vpp# set int state host-vpp1out up
>vpp# set int ip address host-vpp1out 192.168.2.202
>vpp# show int addr
>TenGigabitEthernet82/0/0 (up):
>  192.168.2.20/24
>host-vpp1out (up):
>  192.168.2.202/24
>local0 (dn):
>vpp# trace add af-packet-input 10
>
>
>/* From host, ping vpp */
>
>[root@spdk2 ~]# ping -c 2 192.168.2.202
>PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
>64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
>64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms
>
>/* From vpp, ping host */
>vpp# ping 192.168.2.201
>64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
>64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
>64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
>64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
>64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms
>
>Statistics: 5 sent, 5 received, 0% packet loss
>
>From the target host, I still cannot ping the initiator (192.168.2.10); it does not
>go through the vpp interface, so my vpp interface connection is not correct.
>
>Please, how does one create the vpp host interface and connect it, so that host
>applications (i.e. iscsi_tgt) can communicate in the 192.168.2 subnet? In STEP2,
>should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a
>route to the routing table?
>
>Isaac
>
>From: Zawadzki, Tomasz [mailto:tomasz.zawadzki@intel.com]
>Sent: Thursday, April 12, 2018 12:27 AM
>To: Isaac Otsiabah
>Cc: Harris, James R; Verkamp, Daniel; Paul Von-Stamwitz
>Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hello Isaac,
>
>Are you using the following patch? (I suggest cherry-picking it)
>https://review.gerrithub.io/#/c/389566/
>
>The SPDK iSCSI target can be started without a specific interface to bind on, by not
>specifying any target nodes or portal groups. They can be added later via RPC:
>http://www.spdk.io/doc/iscsi.html#iscsi_rpc
>Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf
>for an example of a minimal iSCSI config.
>
>The suggested flow of starting up the applications is:
>
>1. Unbind the interfaces from the kernel
>
>2. Start VPP and configure the interface via vppctl
>
>3. Start SPDK
>
>4. Configure the iSCSI target via RPC; at this time it should be possible to
>use the interface configured in VPP
>
>Please note, there is some leeway here. The only requirement is having the VPP
>app started before the SPDK app.
>Interfaces in VPP can be created (like tap or veth) and configured at runtime,
>and are available for use in SPDK as well.
>
>Let me know if you have any questions.
>
>Tomek
>
>From: Isaac Otsiabah [mailto:IOtsiabah@us.fujitsu.com]
>Sent: Wednesday, April 11, 2018 8:47 PM
>To: Zawadzki, Tomasz
>Cc: Harris, James R; Verkamp, Daniel; Paul Von-Stamwitz
>Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
>
>Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on a CentOS 7.4
>(x86_64) machine, built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt
>application.
>
>For VPP, I first unbind the NIC from the kernel and start the VPP application:
>
>./usertools/dpdk-devbind.py -u 0000:07:00.0
>
>vpp unix {cli-listen /run/vpp/cli.sock}
>
>Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101
>application needs an interface to bind to during startup, so it fails to start. The
>information under "Running SPDK with VPP":
>
>"VPP application has to be started before SPDK iSCSI target, in order to enable
>usage of network interfaces. After SPDK iSCSI target initialization finishes,
>interfaces configured within VPP will be available to be configured as portal
>addresses. Please refer to Configuring iSCSI Target via RPC method
>(http://www.spdk.io/doc/iscsi.html#iscsi_rpc)."
>
>is not clear, because the instructions at "Configuring iSCSI Target via RPC
>method" suggest the iscsi_tgt server is already running before one can execute
>the RPC commands; but how do I get the iscsi_tgt server running without an
>interface to bind on during its initialization?
>
>Please, can any one of you help explain how to run the SPDK iscsi_tgt
>application with VPP (for instance, what should change in iscsi.conf)? After
>unbinding the NIC, how do I get the iscsi_tgt server to start without an interface
>to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.?
>
>I would appreciate it if anyone would help. Thank you.
>
>
>Isaac
>_______________________________________________
>SPDK mailing list
>SPDK@lists.01.org
>https://lists.01.org/mailman/listinfo/spdk