From: Jesper Dangaard Brouer
Subject: Re: [net-next PATCH RFC] mlx4: RX prefetch loop
Date: Tue, 12 Jul 2016 21:52:52 +0200
Message-ID: <20160712215252.7c72cbed@redhat.com>
References: <20160708172024.4a849b7a@redhat.com>
 <20160708160135.27507.77711.stgit@firesoul>
 <20160711130922.636ee4e6@redhat.com>
 <20160711230509.GA45195@ast-mbp.thefacebook.com>
 <20160712144521.4244e8ab@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
To: Alexander Duyck
Cc: Alexei Starovoitov, Netdev, kafai@fb.com, Daniel Borkmann,
 Tom Herbert, Brenden Blanco, john fastabend, Or Gerlitz,
 Hannes Frederic Sowa, rana.shahot@gmail.com, Thomas Graf,
 "David S. Miller", as754m@att.com, Saeed Mahameed, amira@mellanox.com,
 tzahio@mellanox.com, Eric Dumazet, brouer@redhat.com
In-Reply-To:

On Tue, 12 Jul 2016 09:46:26 -0700
Alexander Duyck wrote:

> On Tue, Jul 12, 2016 at 5:45 AM, Jesper Dangaard Brouer
>  wrote:
> > On Mon, 11 Jul 2016 16:05:11 -0700
> > Alexei Starovoitov  wrote:
> >
> >> On Mon, Jul 11, 2016 at 01:09:22PM +0200, Jesper Dangaard Brouer wrote:
> >> > > -	/* Process all completed CQEs */
> >> > > +	/* Extract and prefetch completed CQEs */
> >> > >  	while (XNOR(cqe->owner_sr_opcode & MLX4_CQE_OWNER_MASK,
> >> > >  		    cq->mcq.cons_index & cq->size)) {
> >> > > +		void *data;
> >> > >
> >> > >  		frags = ring->rx_info + (index << priv->log_rx_info);
> >> > >  		rx_desc = ring->buf + (index << ring->log_stride);
> >> > > +		prefetch(rx_desc);
> >> > >
> >> > >  		/*
> >> > >  		 * make sure we read the CQE after we read the ownership bit
> >> > >  		 */
> >> > >  		dma_rmb();
> >> > >
> >> > > +		cqe_array[cqe_idx++] = cqe;
> >> > > +
> >> > > +		/* Base error handling here, free handled in next loop */
> >> > > +		if (unlikely((cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) ==
> >> > > +			     MLX4_CQE_OPCODE_ERROR))
> >> > > +			goto skip;
> >> > > +
> >> > > +		data = page_address(frags[0].page) + frags[0].page_offset;
> >> > > +		prefetch(data);
> >>
> >> that's probably not correct in all cases, since doing prefetch on the
> >> address that is going to be evicted soon may hurt performance.
> >> We need to dma_sync_single_for_cpu() before doing a prefetch, or
> >> somehow figure out that dma_sync is a nop, so we can omit it
> >> altogether and do whatever prefetches we like.
> >
> > Sure, DMA can be synced first (actually already played with this).
>
> Yes, but the point I think Alexei is indirectly getting at is that you
> are doing all your tests on the x86 architecture, are you not? x86 is
> a very different beast from architectures like ARM, which handle the
> memory organization of the system very differently. In the case of x86
> the only time dma_sync is not a nop is if you force swiotlb to be
> enabled, at which point the whole performance argument is kind of
> pointless anyway.
>
> >> Also, unconditionally doing a batch of 8 may hurt depending on what
> >> is happening either with the stack, bpf afterwards, or even cpu
> >> version.
> >
> > See this as software DDIO; in the unlikely case that data gets
> > evicted, it will still exist in L2 or L3 cache (like DDIO). Notice,
> > only 1024 bytes are getting prefetched here.
>
> I disagree. DDIO only pushes received frames into the L3 cache. What
> you are potentially doing is flooding the L2 cache. The difference in
> size between the L3 and L2 caches is very significant: L3 cache size
> is in the MB range, while the L2 cache is only 256KB or so for Xeon
> processors and such.
> In addition, DDIO is really meant for an architecture that has a
> fairly large cache region to spare, and it limits itself to that
> cache region; the approach taken in this code could potentially
> prefetch a fairly significant chunk of memory.

No matter how you slice it, reading this memory is needed, as I'm
making sure only to prefetch packets that are "ready" and within the
NAPI budget (eth_type_trans/eth_get_headlen).

> >> Doing single prefetch of Nth packet is probably ok most of the
> >> time, but asking cpu to prefetch 8 packets at once is unnecessary,
> >> especially since single prefetch gives the same performance.
> >
> > No, unconditionally prefetching the Nth packet will be wrong most of
> > the time for real workloads, as Eric Dumazet already pointed out.
> >
> > This patch does NOT unconditionally prefetch 8 packets. Prefetching
> > _only_ happens when it is known that packets are ready in the RX
> > ring. We know this prefetched data will be used/touched within the
> > NAPI cycle. Even if processing of the packet flushes the L1 cache,
> > the data will still be in L2 or L3 (like DDIO).
>
> I think the point you are missing here, Jesper, is that the packet
> isn't what will be flushed out of L1; it will be all the data that had
> been fetched before that. For example, the L1 cache can only hold 32K,
> and the way it is set up, if you fetch the first 64 bytes of 8 pages,
> everything that was in those cache sets will be evicted out to L2.
>
> Also it might be worthwhile to see what instruction is being used for
> the prefetch. Last I knew, for read prefetches it was prefetchnta on
> x86, which would only pull the data into the L1 cache as a
> "non-temporal" load. If I am not mistaken, you run the risk of having
> the prefetched data evicted back out, bypassing the L2 and L3 caches,
> unless it is modified.
> That was kind of the point of prefetchnta, as it is really meant to
> be a read-only prefetch, intended to avoid polluting the L2 and L3
> caches.

#ifdef CONFIG_X86_32
# define BASE_PREFETCH		""
# define ARCH_HAS_PREFETCH
#else
# define BASE_PREFETCH		"prefetcht0 %P1"
#endif

static inline void prefetch(const void *x)
{
	alternative_input(BASE_PREFETCH, "prefetchnta %P1",
			  X86_FEATURE_XMM,
			  "m" (*(const char *)x));
}

Thanks for the hint. Looking at the code, it does look like 64-bit
CPUs with SSE (X86_FEATURE_XMM) do use the prefetchnta instruction.

DPDK uses the prefetcht1 instruction at RX (on 32 packets). That might
be the better prefetch instruction to use here (or prefetcht2).

I looked at the arm64 code; it does support prefetching, and googling
shows arm64 also supports prefetching to a specific cache level.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer