From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jeff Shaw
Subject: Re: [PATCH v4 10/15] fm10k: add receive and tranmit function
Date: Wed, 11 Feb 2015 09:28:47 -0800
Message-ID: <20150211172847.GA2984@plxv1143.pdx.intel.com>
References: <1423551775-3604-2-git-send-email-jing.d.chen@intel.com>
 <1423618298-2933-1-git-send-email-jing.d.chen@intel.com>
 <1423618298-2933-11-git-send-email-jing.d.chen@intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1423618298-2933-11-git-send-email-jing.d.chen-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
To: "Chen Jing D(Mark)"
Cc: dev-VfR2kkLFssw@public.gmane.org
List-Id: patches and discussions about DPDK
Errors-To: dev-bounces-VfR2kkLFssw@public.gmane.org
Sender: "dev"

On Wed, Feb 11, 2015 at 09:31:33AM +0800, Chen Jing D(Mark) wrote:
> +uint16_t
> +fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> +	uint16_t nb_pkts)
> +{
> +	struct rte_mbuf *mbuf;
> +	union fm10k_rx_desc desc;
> +	struct fm10k_rx_queue *q = rx_queue;
> +	uint16_t count = 0;
> +	int alloc = 0;
> +	uint16_t next_dd;
> +
> +	next_dd = q->next_dd;
> +
> +	nb_pkts = RTE_MIN(nb_pkts, q->alloc_thresh);
> +	for (count = 0; count < nb_pkts; ++count) {
> +		mbuf = q->sw_ring[next_dd];
> +		desc = q->hw_ring[next_dd];
> +		if (!(desc.d.staterr & FM10K_RXD_STATUS_DD))
> +			break;
> +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX
> +		dump_rxd(&desc);
> +#endif
> +		rte_pktmbuf_pkt_len(mbuf) = desc.w.length;
> +		rte_pktmbuf_data_len(mbuf) = desc.w.length;
> +
> +		mbuf->ol_flags = 0;
> +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE
> +		rx_desc_to_ol_flags(mbuf, &desc);
> +#endif
> +
> +		mbuf->hash.rss = desc.d.rss;
> +
> +		rx_pkts[count] = mbuf;
> +		if (++next_dd == q->nb_desc) {
> +			next_dd = 0;
> +			alloc = 1;
> +		}
> +
> +		/* Prefetch next mbuf while processing current one. */
> +		rte_prefetch0(q->sw_ring[next_dd]);
> +
> +		/*
> +		 * When next RX descriptor is on a cache-line boundary,
> +		 * prefetch the next 4 RX descriptors and the next 8 pointers
> +		 * to mbufs.
> +		 */
> +		if ((next_dd & 0x3) == 0) {
> +			rte_prefetch0(&q->hw_ring[next_dd]);
> +			rte_prefetch0(&q->sw_ring[next_dd]);
> +		}
> +	}
> +
> +	q->next_dd = next_dd;
> +
> +	if ((q->next_dd > q->next_trigger) || (alloc == 1)) {
> +		rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc],
> +			q->alloc_thresh);

The return value should be checked here in case the mempool runs out of
buffers (a rough sketch of what I mean is at the end of this mail).
Thanks Helin for spotting this. I'm not sure how I missed it originally.

> +		for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) {
> +			mbuf = q->sw_ring[q->next_alloc];
> +
> +			/* setup static mbuf fields */
> +			fm10k_pktmbuf_reset(mbuf, q->port_id);
> +
> +			/* write descriptor */
> +			desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf);
> +			desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf);
> +			q->hw_ring[q->next_alloc] = desc;
> +		}
> +		FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger);
> +		q->next_trigger += q->alloc_thresh;
> +		if (q->next_trigger >= q->nb_desc) {
> +			q->next_trigger = q->alloc_thresh - 1;
> +			q->next_alloc = 0;
> +		}
> +	}
> +
> +	return count;
> +}
> +

Thanks,
Jeff
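
P.S. For reference, below is a rough sketch of the check I have in mind,
written against the queue fields already used in this patch. It is only an
illustration, not a tested change; the exact recovery policy (and whether
you also want to count the failure for rx_nombuf accounting) is up to you.

	/* Sketch only: if the mempool is exhausted, skip the refill,
	 * return the packets received so far, and let a later call
	 * retry once the trigger condition fires again. Bailing out
	 * here also avoids re-posting the stale mbuf pointers that
	 * are still sitting in sw_ring.
	 */
	if (unlikely(rte_mempool_get_bulk(q->mp,
			(void **)&q->sw_ring[q->next_alloc],
			q->alloc_thresh) != 0))
		return count;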