From: Zawadzki, Tomasz
To: spdk@lists.01.org
Subject: Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Date: Fri, 31 Aug 2018 07:16:09 +0000

Hello Isaac,

Sorry for the delayed response.

Please keep in mind that the patch for the switch from VCL to the Session API is in active development, with changes being applied regularly: https://review.gerrithub.io/#/c/spdk/spdk/+/417056/

We are still trying to work out best practices and a recommended setup for the SPDK iSCSI target running along with VPP.

Our test environment at this time consists of Fedora 26 machines, using either one or two 40 Gb/s interfaces per host. One host runs the SPDK iSCSI target with VPP (or posix for comparison) and the other runs the SPDK iSCSI initiator.

After the switch to the Session API we were able to saturate a single 40 Gb/s interface with much lower core usage in VPP compared to VCL, and to reduce the number of SPDK iSCSI target cores used in such a setup. Both the Session API and the posix implementation were able to saturate 40 Gb/s with similar CPU efficiency. We are working on evaluating higher throughputs (80 Gb/s and more), as well as looking at optimizations to the usage of the Session API within SPDK.

We have not seen much change from modifying most VPP config parameters from their defaults for our setup, keeping the default num-mbufs and socket-mem of 1024. We mostly change the parameters for the number of worker cores and num-rx-queues.

For the iSCSI parameters, both for posix and the Session API, at certain throughputs we needed to increase the number of targets/LUNs within the portal group. We were doing our comparisons at around 32-128 targets/LUNs. Since this is an evaluation of the iSCSI target Session API, please keep in mind that null bdevs were used to eliminate other bdev types from affecting the results.

Besides that, the key point for higher throughputs is having the iSCSI initiator actually be able to generate enough traffic for the iSCSI target.

May I ask what kind of setup you are using for the comparisons? Are you targeting 10 Gb/s interfaces as noted in previous emails?

Tomek
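For reference, the knobs named above live in /etc/vpp/startup.conf. The actual file used in these tests was not posted, so the stanza below is only an illustrative sketch of where num-mbufs, socket-mem, worker cores and num-rx-queues are set; the values are simply the numbers quoted in this thread, not a recommended configuration.

# /etc/vpp/startup.conf -- illustrative sketch only, not the tested configuration
cpu {
    main-core 1
    corelist-workers 2-3          # "number of worker cores" knob
}
dpdk {
    socket-mem 1024,1024          # MB per NUMA socket; 1024 is the value quoted above
    num-mbufs 65536               # packet-buffer pool size Isaac experimented with
    dev 0000:82:00.0 {
        num-rx-queues 2           # per-device "num-rx-queues" knob
    }
}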
-----Original Message-----
From: IOtsiabah@us.fujitsu.com [mailto:IOtsiabah@us.fujitsu.com]
Sent: Friday, August 31, 2018 12:51 AM
To: Storage Performance Development Kit; 'Isaac Otsiabah'; Zawadzki, Tomasz
Cc: Verkamp, Daniel; Edward.Yang@us.fujitsu.com; PVonStamwitz@us.fujitsu.com
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, you probably went on vacation and are back now. Previously, I sent you the two emails below. Please, can you respond to them for us? Thank you.

Isaac

-----Original Message-----
From: Otsiabah, Isaac
Sent: Tuesday, August 14, 2018 12:40 PM
To: Storage Performance Development Kit; 'Isaac Otsiabah'; Zawadzki, Tomasz
Cc: Verkamp, Daniel; Yang, Edward; Von-Stamwitz, Paul
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Tomasz, we can increase the amount of hugepages used by VPP by increasing the DPDK parameters (socket-mem 1024, num-mbufs 65536); however, there was no improvement in the fio performance test results. We are running our tests on CentOS 7. Are you testing VPP on Fedora instead? Can you share your test environment information with us?

Isaac/Edward

-----Original Message-----
From: Otsiabah, Isaac
Sent: Tuesday, August 14, 2018 10:15 AM
To: Storage Performance Development Kit; 'Isaac Otsiabah'; Zawadzki, Tomasz
Cc: Verkamp, Daniel; Yang, Edward; Von-Stamwitz, Paul; Otsiabah, Isaac
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, we obtained your VPP patch 417056 (git fetch https://review.gerrithub.io/spdk/spdk refs/changes/56/417056/16:test16). We are testing it and have a few questions.

1. Please, can you share with us your test results and your test environment setup or configuration?
2. From experiment, we see that VPP always uses 105 pages from the available hugepages in the system, regardless of the amount available. Is there a way to increase the amount of hugepages for VPP?

Isaac

From: Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz'; 'spdk@lists.01.org'
Cc: Harris, James R; Verkamp, Daniel; Paul Von-Stamwitz
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to the iscsi_tgt application and to VPP is not working. From my understanding, VPP is started first on the target host and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).

     -------------
    |  initiator  |  192.168.2.10
     -------------
          |
          |
 --------------------------------------------  192.168.2.0
          |
          |  192.168.2.20
     --------------
    | vpp, vppctl  |
    | iscsi_tgt    |
     --------------

Both systems have a 10 GbE NIC.

(On target server):
I set up the VPP environment variables through the sysctl command. I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10 GbE NIC (device address 0000:82:00.0).
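The exact commands for those sysctl and driver steps were not quoted in the message, so the following is only an assumed reconstruction of what is described above, reusing the device address from this thread; the hugepage count is illustrative.

# reserve 2 MB hugepages (value is illustrative, not the one actually used)
sysctl -w vm.nr_hugepages=1024

# load the generic UIO driver and move the first 10 GbE NIC onto it
modprobe uio_pci_generic
./usertools/dpdk-devbind.py -u 0000:82:00.0
./usertools/dpdk-devbind.py -b uio_pci_generic 0000:82:00.0
./usertools/dpdk-devbind.py --status     # NIC should now be listed under the DPDK-compatible driver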
That worked, so I started the vpp application; from the startup output, the NIC is in use by VPP:

[root@spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran
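The startup.conf behind that EAL line was not shared; as an assumption, a dpdk stanza along the following lines would produce it, and it also suggests why VPP was seen using only ~105 hugepages earlier: socket-mem 64,64 maps just 128 MB, i.e. 64 of the 2 MB hugepages, with the remainder plausibly coming from VPP's other hugepage-backed pools.

# assumed /etc/vpp/startup.conf dpdk section matching the EAL args above
dpdk {
    dev 0000:82:00.0              # NIC bound to uio_pci_generic earlier
    socket-mem 64,64              # 64 MB per socket = 64 x 2 MB hugepages in total
}
# Raising socket-mem (e.g. 1024) and num-mbufs here is what grows that footprint.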
STEP 1:
Then from the vppctl command prompt, I set up an IP address for the 10 GbE interface and brought it up. From VPP, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */
vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On initiator):
/* ping vpp interface from initiator */
[root@spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP 2:
However, when I start the iscsi_tgt server, it does not have access to the 192.168.2.x subnet above, so I ran these commands on the target server to create a veth pair and then connected it to a VPP host-interface as follows:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10

/* From host, ping vpp */
[root@spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the VPP interface, so my VPP interface connection is not correct.

Please, how does one create the VPP host-interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP 2, should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a route to the routing table?

Isaac
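To make that last question concrete: a hedged sketch of the separate-subnet-plus-routing alternative Isaac describes could look like the following, with VPP itself forwarding between its two connected subnets. The 192.168.3.x addresses are an assumed choice and this setup is not verified anywhere in this thread; interface names are reused from STEP 2.

# On the target host: give the veth leg its own subnet
ip addr flush dev vpp1host
ip addr add 192.168.3.2/24 dev vpp1host
ip route add 192.168.2.0/24 via 192.168.3.1     # reach the NIC subnet through VPP

# In vppctl: put host-vpp1out on the new subnet; VPP routes between
# host-vpp1out and TenGigabitEthernet82/0/0 since both are connected routes
vpp# set int ip address host-vpp1out 192.168.3.1/24

# On the initiator: a return route so replies to 192.168.3.x come back via VPP
ip route add 192.168.3.0/24 via 192.168.2.20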
From: Zawadzki, Tomasz [mailto:tomasz.zawadzki@intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah
Cc: Harris, James R; Verkamp, Daniel; Paul Von-Stamwitz
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc

Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

Suggested flow of starting up the applications is:
1. Unbind interfaces from the kernel
2. Start VPP and configure the interface via vppctl
3. Start SPDK
4. Configure the iSCSI target via RPC; at this point it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having the VPP app started before the SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek
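As a concrete illustration of that four-step flow: the sketch below reuses the device and addresses from earlier in this thread, and the rpc.py method names are the ones from that era's SPDK iSCSI documentation, so they may differ in other SPDK versions.

# 1. Unbind the NIC from the kernel and bind it to a DPDK-compatible driver
./usertools/dpdk-devbind.py -b uio_pci_generic 0000:82:00.0

# 2. Start VPP and give the interface an address
vpp -c /etc/vpp/startup.conf
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vppctl set interface state TenGigabitEthernet82/0/0 up

# 3. Start the SPDK iSCSI target with no portal groups or target nodes in its config
./app/iscsi_tgt/iscsi_tgt -m 0x101 &

# 4. Configure it over RPC once the VPP interface is visible to SPDK
./scripts/rpc.py add_portal_group 1 192.168.2.20:3260
./scripts/rpc.py add_initiator_group 2 ANY 192.168.2.0/24
./scripts/rpc.py construct_null_bdev Null0 1024 512            # 1024 MB, 512 B blocks
./scripts/rpc.py construct_target_node disk1 "Data Disk1" "Null0:0" "1:2" 64 -d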
From: Isaac Otsiabah [mailto:IOtsiabah@us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz
Cc: Harris, James R; Verkamp, Daniel; Paul Von-Stamwitz
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on a CentOS 7.4 (x86_64) machine, built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.

For VPP, first, I unbind the NIC from the kernel and start the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0
vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:

"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method."

is not clear, because the instructions in "Configuring iSCSI Target via RPC method" suggest the iscsi_tgt server must already be running before one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any of you help explain how to run the SPDK iscsi_tgt application with VPP (for instance: what should change in iscsi.conf? After unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to? What address should be assigned to the Portal in iscsi.conf, etc.)? I would appreciate it if anyone would help. Thank you.

Isaac

_______________________________________________
SPDK mailing list
SPDK@lists.01.org
https://lists.01.org/mailman/listinfo/spdk