From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mallesh Koujalagi
Subject: [PATCH v2] net/null: support bulk allocation
Date: Thu, 8 Mar 2018 15:40:41 -0800
Message-ID: <1520552441-20833-1-git-send-email-malleshx.koujalagi@intel.com>
References: <1517627510-60932-1-git-send-email-malleshx.koujalagi@intel.com>
Cc: mtetsuyah@gmail.com, Mallesh Koujalagi
To: dev@dpdk.org, ferruh.yigit@intel.com, konstantin.ananyev@intel.com
Return-path: Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by dpdk.org (Postfix) with ESMTP id 437D25F12
 for ; Fri, 9 Mar 2018 00:41:03 +0100 (CET)
In-Reply-To: <1517627510-60932-1-git-send-email-malleshx.koujalagi@intel.com>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Bulk allocation of multiple mbufs improves throughput by between ~2% and
8% on a single core (1.8 GHz), depending on the use case:

1. Testpmd case: two null devices with copy, 8% improvement.

   testpmd -c 0x3 -n 4 --socket-mem 1024,1024 \
	--vdev 'eth_null0,size=64,copy=1' \
	--vdev 'eth_null1,size=64,copy=1' -- -i -a \
	--coremask=0x2 --txrst=64 --txfreet=64 --txd=256 --rxd=256 \
	--rxfreet=64 --burst=64 --txpt=64 --txq=1 --rxq=1 --numa

2. OVS switch case: 2% improvement.

   $VSCTL add-port ovs-br dpdk1 -- set Interface dpdk1 type=dpdk \
	options:dpdk-devargs=eth_null0,size=64,copy=1
   $VSCTL add-port ovs-br dpdk2 -- set Interface dpdk2 type=dpdk \
	options:dpdk-devargs=eth_null1,size=64,copy=1

Signed-off-by: Mallesh Koujalagi
---
 drivers/net/null/rte_eth_null.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 9385ffd..c019d2d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -105,10 +105,10 @@ eth_null_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		return 0;
 
 	packet_size = h->internals->packet_size;
+	if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
+		return 0;
+
 	for (i = 0; i < nb_bufs; i++) {
-		bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
-		if (!bufs[i])
-			break;
 		bufs[i]->data_len = (uint16_t)packet_size;
 		bufs[i]->pkt_len = packet_size;
 		bufs[i]->port = h->internals->port_id;
@@ -130,10 +130,10 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		return 0;
 
 	packet_size = h->internals->packet_size;
+	if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
+		return 0;
+
 	for (i = 0; i < nb_bufs; i++) {
-		bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
-		if (!bufs[i])
-			break;
 		rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), h->dummy_packet,
 			packet_size);
 		bufs[i]->data_len = (uint16_t)packet_size;
-- 
2.7.4