Tomasz, we increased the amount of hugepages used by vpp by raising the DPDK parameters (socket-mem 1024, num-mbufs 65536); however, there was no improvement in the fio performance test results. We are running our tests on CentOS 7. Are you testing vpp on Fedora instead? Can you share your test environment information with us?

Isaac/Edward

-----Original Message-----
From: Otsiabah, Isaac
Sent: Tuesday, August 14, 2018 10:15 AM
To: Storage Performance Development Kit; 'Isaac Otsiabah'; Zawadzki, Tomasz
Cc: Verkamp, Daniel; Yang, Edward; Von-Stamwitz, Paul; Otsiabah, Isaac
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, we obtained your vpp patch 417056 (git fetch https://review.gerrithub.io/spdk/spdk refs/changes/56/417056/16:test16). We are testing it and have a few questions.

1. Please, can you share with us your test results and your test environment setup or configuration?
2. From experiment, we see that vpp always uses 105 pages from the available hugepages in the system, regardless of the amount available. Is there a way to increase the number of hugepages vpp uses?

Isaac

From: Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz'; 'spdk(a)lists.01.org'
Cc: Harris, James R; Verkamp, Daniel; Paul Von-Stamwitz
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to the iscsi_tgt application and to vpp is not working. From my understanding, vpp is started first on the target host, and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).

        -------------
        | initiator |  192.168.2.10
        -------------
              |
              |
  ------------------------------  192.168.2.0/24
              |
              |         192.168.2.20
        ---------------
        | vpp, vppctl |
        | iscsi_tgt   |
        ---------------

Both systems have a 10 GbE NIC.

(On target server): I set up the vpp environment variables through the sysctl command. I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10 GbE NIC (device address 0000:82:00.0), roughly as in the sketch below.
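[For reference, a minimal sketch of this hugepage and driver setup; the page count and device address are illustrative, not the exact values used in this test:]

# reserve 2 MB hugepages for DPDK/vpp (count is illustrative)
sysctl -w vm.nr_hugepages=1024

# load the generic UIO driver and bind the 10 GbE NIC to it
modprobe uio
modprobe uio_pci_generic
./usertools/dpdk-devbind.py -b uio_pci_generic 0000:82:00.0

[The socket-mem/num-mbufs values mentioned at the top of the thread would go in the dpdk section of /etc/vpp/startup.conf. In VPP releases of that era, num-mbufs was a dpdk-section parameter; values shown are examples for a two-socket system:]

dpdk {
  dev 0000:82:00.0
  socket-mem 1024,1024
  num-mbufs 65536
}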
That worked, so I started the vpp application; from the startup output, the NIC is in use by vpp:

[root(a)spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nrank:0
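[As a quick way to confirm how many hugepages vpp actually consumes (relevant to the 105-page observation above), one can compare the kernel counters before and after starting vpp; a minimal sketch:]

# before starting vpp
grep -E 'HugePages_(Total|Free)' /proc/meminfo

# start vpp, then check again; Total - Free = pages in use
grep -E 'HugePages_(Total|Free)' /proc/meminfo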
STEP 1: From the vppctl command prompt, I set up an IP address for the 10 GbE interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.

vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        down
local0                            0        down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
              Name               Idx       State          Counter          Count
TenGigabitEthernet82/0/0          1        up
local0                            0        down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
local0 (dn):

/* ping initiator from vpp */
vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms

(On initiator):
/* ping vpp interface from initiator */
[root(a)spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms

STEP 2: However, when I start the iscsi_tgt server, it does not have access to the 192.168.2.x subnet above, so I ran these commands on the target server to create a veth pair and connect it to a vpp host-interface:

ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host

vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
  192.168.2.20/24
host-vpp1out (up):
  192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10

/* From host, ping vpp */
[root(a)spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms

/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms

Statistics: 5 sent, 5 received, 0% packet loss

From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp host-interface connection is not correct. Please, how does one create the vpp host-interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP 2, should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a route to the routing table (see the sketch after this message)?

Isaac
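[A sketch of the routed-subnet variant asked about above, assuming the veth pair moves to 192.168.3.0/24 and VPP routes between the two connected subnets; all addresses are illustrative, and the initiator also needs a return route:]

# on the target host: renumber the Linux end of the veth pair
ip addr flush dev vpp1host
ip addr add 192.168.3.201/24 dev vpp1host
ip route add 192.168.2.0/24 via 192.168.3.202

# in vppctl: renumber the host-interface; VPP then routes between
# 192.168.3.0/24 (host side) and 192.168.2.0/24 (NIC side)
vpp# set int ip address del host-vpp1out 192.168.2.202/24
vpp# set int ip address host-vpp1out 192.168.3.202/24

# on the initiator: return route toward the host-side subnet
ip route add 192.168.3.0/24 via 192.168.2.20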
From: Zawadzki, Tomasz [mailto:tomasz.zawadzki(a)intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah
Cc: Harris, James R; Verkamp, Daniel; Paul Von-Stamwitz
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hello Isaac,

Are you using the following patch? (I suggest cherry-picking it)
https://review.gerrithub.io/#/c/389566/

The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc. Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.

The suggested flow for starting up the applications is (see the RPC sketch at the end of this thread):
1. Unbind interfaces from the kernel
2. Start VPP and configure the interface via vppctl
3. Start SPDK
4. Configure the iSCSI target via RPC; at this point it should be possible to use the interface configured in VPP

Please note, there is some leeway here. The only requirement is having the VPP app started before the SPDK app. Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.

Let me know if you have any questions.

Tomek

From: Isaac Otsiabah [mailto:IOtsiabah(a)us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz
Cc: Harris, James R; Verkamp, Daniel; Paul Von-Stamwitz
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?

Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on a CentOS 7.4 (x86_64) machine, built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application. For VPP, I first unbound the NIC from the kernel and started the VPP application:

./usertools/dpdk-devbind.py -u 0000:07:00.0
vpp unix {cli-listen /run/vpp/cli.sock}

Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at "Running SPDK with VPP":

"VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method."

is not clear, because the instructions at "Configuring iSCSI Target via RPC method" suggest the iscsi_tgt server is already running before one can execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?

Please, can any of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf? After unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.? I would appreciate any help. Thank you.

Isaac
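[A minimal sketch of steps 3 and 4 from Tomasz's list, assuming a Malloc bdev backs the LUN; the RPC names (add_portal_group, add_initiator_group, construct_target_node) and arguments follow the SPDK iSCSI documentation of that era and may differ in other SPDK versions:]

# start the target with a config that defines no portal groups or target nodes
./app/iscsi_tgt/iscsi_tgt -m 0x101 -c iscsi.conf &

# once it is up, attach everything to the VPP-configured address via RPC
./scripts/rpc.py construct_malloc_bdev -b Malloc0 64 512        # 64 MB bdev, 512 B blocks
./scripts/rpc.py add_portal_group 1 192.168.2.20:3260           # portal on the VPP interface
./scripts/rpc.py add_initiator_group 2 ANY 192.168.2.0/24       # allow the initiator subnet
./scripts/rpc.py construct_target_node Target1 Target1_alias 'Malloc0:0' '1:2' 64 -d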
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk