From: Neil Horman
Subject: Re: [PATCH v2] librte_pmd_packet: add PMD for AF_PACKET-based virtual devices
Date: Tue, 15 Jul 2014 16:31:08 -0400
Message-ID: <20140715203108.GA20273@localhost.localdomain>
References: <1405024369-30058-1-git-send-email-linville@tuxdriver.com> <1405362290-6753-1-git-send-email-linville@tuxdriver.com> <20140715121743.GA14273@localhost.localdomain> <20140715140111.GA26012@tuxdriver.com>
In-Reply-To: <20140715140111.GA26012-2XuSBdqkA4R54TAoqtyWWQ@public.gmane.org>
To: "John W. Linville"
Cc: "dev-VfR2kkLFssw@public.gmane.org"

On Tue, Jul 15, 2014 at 10:01:11AM -0400, John W. Linville wrote:
> On Tue, Jul 15, 2014 at 08:17:44AM -0400, Neil Horman wrote:
> > On Tue, Jul 15, 2014 at 12:15:49AM +0000, Zhou, Danny wrote:
> > > According to my performance measurements for 64B small packets,
> > > single-queue performance is better than 16-queue performance
> > > (1.35M pps vs. 0.93M pps).  That makes sense to me: in the
> > > 16-queue case, more CPU cycles are spent in kernel land (87% for
> > > 16 queues vs. 80% for 1 queue) because the NAPI-enabled ixgbe
> > > driver has to switch between polling and interrupt modes to
> > > service per-queue rx interrupts, so more context-switch overhead
> > > is involved.  Also, since the eth_packet_rx/eth_packet_tx
> > > routines involve two memory copies per packet between the DPDK
> > > mbuf and the AF_PACKET frame buffer (pbuf), they can hardly
> > > achieve high performance unless packets are DMA'd directly into
> > > the mbuf, which would require ixgbe driver support.
> >
> > I thought 16 queues would be spread out between as many cpus as
> > you had though, obviating the need for context switches, no?
>
> I think Danny is testing the single CPU case.  Having more queues
> than CPUs probably does not provide any benefit.
>
Ah, yes.  Generally speaking, you never want nr_cpus < nr_queues.
Otherwise you'll just be fighting yourself.

> It would be cool to hack the DPDK memory management to work directly
> out of the mmap'ed AF_PACKET buffers.  But at this point I don't
> have enough knowledge of DPDK internals to know if that is at all
> reasonable...
>
> John
>
> P.S.  Danny, have you run any performance tests on the PCAP driver?
>
> --
> John W. Linville		Someday the world will need a hero, and you
> linville-2XuSBdqkA4R54TAoqtyWWQ@public.gmane.org	might be all we have.  Be ready.
>
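For anyone following along: the rx-side half of the copy Danny
describes looks roughly like the sketch below.  This is based on the
behavior discussed above, not the patch's exact code; the function and
parameter names (sketch_packet_rx, ring, next) are illustrative.
eth_packet_tx would perform the mirror-image copy, mbuf -> ring frame,
which is where the second per-packet copy comes from.

    #include <linux/if_packet.h>
    #include <string.h>
    #include <rte_mbuf.h>

    /*
     * Hypothetical sketch of the rx-side copy under discussion: each
     * frame that lands in the mmap'ed PACKET_RX_RING is memcpy'd into
     * a freshly allocated mbuf before being handed up to the app.
     */
    static uint16_t
    sketch_packet_rx(struct tpacket2_hdr **ring, unsigned int *next,
                     unsigned int ring_size, struct rte_mempool *mp,
                     struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
        uint16_t num_rx = 0;

        for (uint16_t i = 0; i < nb_pkts; i++) {
            struct tpacket2_hdr *ppd = ring[*next];

            /* Frame still owned by the kernel?  Nothing to read. */
            if ((ppd->tp_status & TP_STATUS_USER) == 0)
                break;

            struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mp);
            if (mbuf == NULL)
                break;

            /* The copy in question: ring frame -> mbuf data area. */
            memcpy(rte_pktmbuf_mtod(mbuf, void *),
                   (uint8_t *)ppd + ppd->tp_mac, ppd->tp_snaplen);
            rte_pktmbuf_pkt_len(mbuf) = ppd->tp_snaplen;
            rte_pktmbuf_data_len(mbuf) = ppd->tp_snaplen;

            /* Return the frame to the kernel, advance the ring. */
            ppd->tp_status = TP_STATUS_KERNEL;
            *next = (*next + 1) % ring_size;
            bufs[num_rx++] = mbuf;
        }

        return num_rx;
    }

Both copies only go away if the mbufs alias the ring frames
themselves, which is exactly the hack John suggests.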
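And for reference on that zero-copy idea, this is the kernel-shared
region in question, set up via the stock TPACKET_V2 API (a minimal
sketch, error handling trimmed; map_rx_ring is a made-up name).  The
fd here is an AF_PACKET socket, e.g. from
socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL)).

    #include <sys/socket.h>
    #include <sys/mman.h>
    #include <linux/if_packet.h>

    /*
     * Sketch: map the PACKET_RX_RING.  One contiguous kernel-shared
     * area holds every rx frame; a zero-copy PMD would have to carve
     * its mbufs out of this region instead of a hugepage mempool.
     */
    static void *
    map_rx_ring(int fd, struct tpacket_req *req)
    {
        int v = TPACKET_V2;

        /* Select the fixed-frame TPACKET_V2 ring layout. */
        if (setsockopt(fd, SOL_PACKET, PACKET_VERSION,
                       &v, sizeof(v)) < 0)
            return NULL;
        /* Ask the kernel to allocate the ring... */
        if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING,
                       req, sizeof(*req)) < 0)
            return NULL;

        /* ...and map it into our address space. */
        return mmap(NULL,
                    (size_t)req->tp_block_size * req->tp_block_nr,
                    PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_LOCKED, fd, 0);
    }

Teaching the DPDK mempool to hand out mbufs whose data areas point
into this mapping is the part that would require hacking DPDK
internals.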