From: Robert Sanford <rsanford@akamai.com>
To: dev@dpdk.org, cristian.dumitrescu@intel.com
Subject: [PATCH 4/4] port: fix ethdev writer burst too big
Date: Mon, 28 Mar 2016 16:51:37 -0400
Message-ID: <1459198297-49854-5-git-send-email-rsanford@akamai.com>
In-Reply-To: <1459198297-49854-1-git-send-email-rsanford@akamai.com>
References: <1459198297-49854-1-git-send-email-rsanford@akamai.com>

For the f_tx_bulk functions in rte_port_ethdev.c, we may unintentionally
send bursts larger than tx_burst_sz to the underlying ethdev.
Some PMDs (e.g., ixgbe) may truncate this request to their maximum burst
size, resulting in unnecessary enqueuing failures or ethdev writer retries.

We propose to fix this by moving the tx buffer flushing logic from
*after* the loop that puts all packets into the tx buffer, to *inside*
the loop, testing for a full burst when adding each packet.

Signed-off-by: Robert Sanford <rsanford@akamai.com>
---
 lib/librte_port/rte_port_ethdev.c |   20 ++++++++++----------
 1 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/lib/librte_port/rte_port_ethdev.c b/lib/librte_port/rte_port_ethdev.c
index 3fb4947..1283338 100644
--- a/lib/librte_port/rte_port_ethdev.c
+++ b/lib/librte_port/rte_port_ethdev.c
@@ -151,7 +151,7 @@ static int rte_port_ethdev_reader_stats_read(void *port,
 struct rte_port_ethdev_writer {
 	struct rte_port_out_stats stats;
 
-	struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
+	struct rte_mbuf *tx_buf[RTE_PORT_IN_BURST_SIZE_MAX];
 	uint32_t tx_burst_sz;
 	uint16_t tx_buf_count;
 	uint64_t bsz_mask;
@@ -257,11 +257,11 @@ rte_port_ethdev_writer_tx_bulk(void *port,
 			p->tx_buf[tx_buf_count++] = pkt;
 			RTE_PORT_ETHDEV_WRITER_STATS_PKTS_IN_ADD(p, 1);
 			pkts_mask &= ~pkt_mask;
-		}
 
-		p->tx_buf_count = tx_buf_count;
-		if (tx_buf_count >= p->tx_burst_sz)
-			send_burst(p);
+			p->tx_buf_count = tx_buf_count;
+			if (tx_buf_count >= p->tx_burst_sz)
+				send_burst(p);
+		}
 	}
 
 	return 0;
@@ -328,7 +328,7 @@ static int rte_port_ethdev_writer_stats_read(void *port,
 struct rte_port_ethdev_writer_nodrop {
 	struct rte_port_out_stats stats;
 
-	struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
+	struct rte_mbuf *tx_buf[RTE_PORT_IN_BURST_SIZE_MAX];
 	uint32_t tx_burst_sz;
 	uint16_t tx_buf_count;
 	uint64_t bsz_mask;
@@ -466,11 +466,11 @@ rte_port_ethdev_writer_nodrop_tx_bulk(void *port,
 			p->tx_buf[tx_buf_count++] = pkt;
 			RTE_PORT_ETHDEV_WRITER_NODROP_STATS_PKTS_IN_ADD(p, 1);
 			pkts_mask &= ~pkt_mask;
-		}
 
-		p->tx_buf_count = tx_buf_count;
-		if (tx_buf_count >= p->tx_burst_sz)
-			send_burst_nodrop(p);
+			p->tx_buf_count = tx_buf_count;
+			if (tx_buf_count >= p->tx_burst_sz)
+				send_burst_nodrop(p);
+		}
 	}
 
 	return 0;
-- 
1.7.1
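
P.S. For readers skimming the thread, here is a minimal standalone sketch of
the pattern the patch moves to: checking the burst threshold as each packet is
buffered, instead of once after the loop. It is an illustration only, not the
librte_port code; the names (struct tx_writer, tx_writer_flush, tx_writer_send)
are made up for the example, and only rte_eth_tx_burst(), rte_pktmbuf_free()
and RTE_PORT_IN_BURST_SIZE_MAX are real DPDK symbols.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_port.h>

/* Hypothetical buffered writer; not the rte_port_ethdev_writer layout. */
struct tx_writer {
	struct rte_mbuf *buf[RTE_PORT_IN_BURST_SIZE_MAX];
	uint16_t count;
	uint16_t burst_sz;	/* <= RTE_PORT_IN_BURST_SIZE_MAX */
	uint16_t queue_id;
	uint8_t port_id;
};

static void
tx_writer_flush(struct tx_writer *w)
{
	uint16_t sent, i;

	/* Hand at most burst_sz packets to the PMD in one call. */
	sent = rte_eth_tx_burst(w->port_id, w->queue_id, w->buf, w->count);

	/* Like the (dropping) ethdev writer, free whatever was not sent. */
	for (i = sent; i < w->count; i++)
		rte_pktmbuf_free(w->buf[i]);

	w->count = 0;
}

static void
tx_writer_send(struct tx_writer *w, struct rte_mbuf **pkts, uint16_t n_pkts)
{
	uint16_t i;

	for (i = 0; i < n_pkts; i++) {
		w->buf[w->count++] = pkts[i];

		/*
		 * Test the threshold inside the loop, so the burst passed
		 * to rte_eth_tx_burst() never exceeds burst_sz, no matter
		 * how many packets the caller supplies at once.
		 */
		if (w->count >= w->burst_sz)
			tx_writer_flush(w);
	}
}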