From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	Netdev <netdev@vger.kernel.org>,
	kafai@fb.com, Daniel Borkmann <daniel@iogearbox.net>,
	Tom Herbert <tom@herbertland.com>,
	Brenden Blanco <bblanco@plumgrid.com>,
	john fastabend <john.fastabend@gmail.com>,
	Or Gerlitz <gerlitz.or@gmail.com>,
	Hannes Frederic Sowa <hannes@stressinduktion.org>,
	rana.shahot@gmail.com, Thomas Graf <tgraf@suug.ch>,
	"David S. Miller" <davem@davemloft.net>,
	as754m@att.com, Saeed Mahameed <saeedm@mellanox.com>,
	amira@mellanox.com, tzahio@mellanox.com,
	Eric Dumazet <eric.dumazet@gmail.com>,
	brouer@redhat.com
Subject: Re: [net-next PATCH RFC] mlx4: RX prefetch loop
Date: Tue, 12 Jul 2016 21:52:52 +0200	[thread overview]
Message-ID: <20160712215252.7c72cbed@redhat.com> (raw)
In-Reply-To: <CAKgT0Ud8wrz8jRe5JzaiycZmO3fxNhPudRiv7up4ZWFfYWfj4A@mail.gmail.com>

On Tue, 12 Jul 2016 09:46:26 -0700
Alexander Duyck <alexander.duyck@gmail.com> wrote:

> On Tue, Jul 12, 2016 at 5:45 AM, Jesper Dangaard Brouer
> <brouer@redhat.com> wrote:
> > On Mon, 11 Jul 2016 16:05:11 -0700
> > Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
> >  
> >> On Mon, Jul 11, 2016 at 01:09:22PM +0200, Jesper Dangaard Brouer wrote:  
> >> > > - /* Process all completed CQEs */
> >> > > + /* Extract and prefetch completed CQEs */
> >> > >   while (XNOR(cqe->owner_sr_opcode & MLX4_CQE_OWNER_MASK,
> >> > >               cq->mcq.cons_index & cq->size)) {
> >> > > +         void *data;
> >> > >
> >> > >           frags = ring->rx_info + (index << priv->log_rx_info);
> >> > >           rx_desc = ring->buf + (index << ring->log_stride);
> >> > > +         prefetch(rx_desc);
> >> > >
> >> > >           /*
> >> > >            * make sure we read the CQE after we read the ownership bit
> >> > >            */
> >> > >           dma_rmb();
> >> > >
> >> > > +         cqe_array[cqe_idx++] = cqe;
> >> > > +
> >> > > +         /* Base error handling here, free handled in next loop */
> >> > > +         if (unlikely((cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) ==
> >> > > +                      MLX4_CQE_OPCODE_ERROR))
> >> > > +                 goto skip;
> >> > > +
> >> > > +         data = page_address(frags[0].page) + frags[0].page_offset;
> >> > > +         prefetch(data);  
> >>
> >> that's probably not correct in all cases, since doing prefetch on the address
> >> that is going to be evicted soon may hurt performance.
> >> We need to dma_sync_single_for_cpu() before doing a prefetch or
> >> somehow figure out that dma_sync is a nop, so we can omit it altogether
> >> and do whatever prefetches we like.  
> >
> > Sure, DMA can be synced first (actually already played with this).  
> 
> Yes, but the point I think Alexei is indirectly getting at is that
> you are doing all your tests on x86, are you not?  x86 is a very
> different beast from architectures like ARM when it comes to how
> they handle the memory organization of the system.  On x86 the only
> time dma_sync is not a nop is if you force swiotlb to be enabled, at
> which point the whole performance argument is kind of pointless
> anyway.
> 
> >> Also unconditionally doing batch of 8 may also hurt depending on what
> >> is happening either with the stack, bpf afterwards or even cpu version.  
> >
> > See this as software DDIO: in the unlikely case that the data gets
> > evicted, it will still exist in the L2 or L3 cache (like DDIO).
> > Notice, only 1024 bytes are getting prefetched here.  
> 
> I disagree.  DDIO only pushes received frames into the L3 cache.  What
> you are potentially doing is flooding the L2 cache.  The difference in
> size between the L3 and L2 caches is very significant: L3 cache size
> is in the MB range, while the L2 cache is only 256KB or so for Xeon
> processors and such.  In addition, DDIO is really meant for an
> architecture that has a fairly large cache region to spare, and it
> limits itself to that cache region; the approach taken in this code
> could potentially prefetch a fairly significant chunk of memory.

No matter how you slice it, reading this memory is needed, as I make
sure only to prefetch packets that are "ready" and within the NAPI
budget.  (The data is touched by eth_type_trans/eth_get_headlen.)
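To make the two-pass structure concrete, here is a minimal user-space
sketch.  The struct names and the plain "ready" flag are illustrative
stand-ins, not the real mlx4 CQE ownership/descriptor layout:

```c
#include <stddef.h>

#define RING_SIZE 64	/* must be a power of two for the index mask */

/* Hypothetical simplified RX descriptor: 'ready' stands in for the
 * CQE ownership check, 'data' for the packet payload. */
struct rx_desc {
	int ready;
	int data;
};

/* Pass 1: collect up to 'budget' consecutive ready descriptors,
 * issuing a prefetch hint for each one's data.  Pass 2: process the
 * collected batch.  Returns the number of packets processed. */
static int process_ring(struct rx_desc *ring, unsigned int cons,
			int budget, int *sum)
{
	struct rx_desc *batch[RING_SIZE];
	int n = 0, i;

	while (n < budget) {
		struct rx_desc *d = &ring[(cons + n) & (RING_SIZE - 1)];

		if (!d->ready)
			break;
		__builtin_prefetch(&d->data);	/* hint only; never affects results */
		batch[n++] = d;
	}

	for (i = 0; i < n; i++)
		*sum += batch[i]->data;

	return n;
}
```

The point of the structure is that pass 1 never looks past the first
not-ready descriptor or past the budget, so every prefetched line is
known to be touched within the same NAPI cycle.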
 
> >> Doing single prefetch of Nth packet is probably ok most of the
> >> time, but asking cpu to prefetch 8 packets at once is unnecessary
> >> especially since single prefetch gives the same performance.  
> >
> > No, unconditionally prefetching the Nth packet will be wrong most
> > of the time for real workloads, as Eric Dumazet already pointed
> > out.
> >
> > This patch does NOT unconditionally prefetch 8 packets.  Prefetching
> > _only_ happens when it is known that packets are ready in the RX
> > ring.  We know this prefetched data will be used/touched within the
> > NAPI cycle.  Even if processing a packet flushes it out of the L1
> > cache, it will still be in L2 or L3 (like DDIO).  
> 
> I think the point you are missing here Jesper is that the packet isn't
> what will be flushed out of L1.  It will be all the data that had been
> fetched before that.  So for example the L1 cache can only hold 32K,
> and the way it is set up, if you fetch the first 64 bytes of 8 pages,
> everything that was in those cache sets will be evicted out to L2.
> 
> Also it might be worthwhile to see what instruction is being used for
> the prefetch.  Last I knew, for read prefetches it was prefetchnta on
> x86, which only pulls the data into the L1 cache as a "non-temporal"
> fetch.  If I am not mistaken, you run the risk of having the
> prefetched data evicted back out, bypassing the L2 and L3 caches,
> unless it is modified.  That was kind of the point of prefetchnta: it
> is really meant to be a read-only prefetch, meant to avoid polluting
> the L2 and L3 caches.

#ifdef CONFIG_X86_32
# define BASE_PREFETCH		""
# define ARCH_HAS_PREFETCH
#else
# define BASE_PREFETCH		"prefetcht0 %P1"
#endif

static inline void prefetch(const void *x)
{
	alternative_input(BASE_PREFETCH, "prefetchnta %P1",
			  X86_FEATURE_XMM,
			  "m" (*(const char *)x));
}

Thanks for the hint. Looking at the code, it does look like 64-bit CPUs
with SSE (X86_FEATURE_XMM) use the prefetchnta instruction.

DPDK uses the prefetcht1 instruction at RX (on 32 packets).  That might
be the better prefetch instruction to use (or prefetcht2).  Looking at
the arm64 code, it does support prefetching, and googling shows arm64
also supports prefetching into a specific cache level.
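For quick portable experiments, the GCC/Clang builtin exposes the same
choice of cache level: __builtin_prefetch(addr, rw, locality), where
the locality constant 0..3 typically maps on x86 to
prefetchnta/prefetcht2/prefetcht1/prefetcht0 respectively, and on arm64
to the corresponding PRFM variants.  A rough sketch (the hint never
changes results, only timing):

```c
#include <stddef.h>

/* Sum an array while prefetching a fixed distance ahead.  The third
 * argument to __builtin_prefetch must be a compile-time constant;
 * here locality 1 roughly corresponds to prefetcht2 on x86. */
static long sum_prefetched(const int *a, size_t n, size_t dist)
{
	long sum = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (i + dist < n)
			__builtin_prefetch(&a[i + dist], 0 /* read */, 1);
		sum += a[i];
	}
	return sum;
}
```

The prefetch distance would have to be tuned per workload; too short
and the data does not arrive in time, too long and it risks eviction
before use, which is exactly the concern raised above.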

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
