From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Jose Abreu <Jose.Abreu@synopsys.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-stm32@st-md-mailman.stormreply.com" 
	<linux-stm32@st-md-mailman.stormreply.com>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>,
	Joao Pinto <Joao.Pinto@synopsys.com>,
	"David S . Miller" <davem@davemloft.net>,
	Giuseppe Cavallaro <peppe.cavallaro@st.com>,
	Alexandre Torgue <alexandre.torgue@st.com>,
	Maxime Coquelin <mcoquelin.stm32@gmail.com>,
	Maxime Ripard <maxime.ripard@bootlin.com>,
	Chen-Yu Tsai <wens@csie.org>
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool
Date: Thu, 4 Jul 2019 15:04:41 +0300
Message-ID: <20190704120441.GA6866@apalos>
In-Reply-To: <20190704135414.0dd5df76@carbon>

Hi Jesper,

> On Thu, 4 Jul 2019 10:13:37 +0000
> Jose Abreu <Jose.Abreu@synopsys.com> wrote:
> > > The page_pool DMA mapping cannot be "kept" when page traveling into the
> > > network stack attached to an SKB.  (Ilias and I have a long term plan[1]
> > > to allow this, but you cannot do it ATM).  
> > 
> > The reason I recycle the page is this previous call to:
> > 
> > 	skb_copy_to_linear_data()
> > 
> > So, technically, I'm syncing the page(s) to the CPU and then memcpy'ing
> > to a previously allocated SKB ... so I think it's safe to just recycle
> > the mapping.
> 
> I didn't notice the skb_copy_to_linear_data(); it will copy the entire
> frame, leaving the page unused and available for recycle.

Yeah, this is essentially a 'copybreak' without the byte limitation that
other drivers usually impose (remember mvneta was doing this for all
packets < 256b).

That's why I was concerned about what happens on > 1000b frames and what
the memory pressure is going to be.
The trade-off here is copying vs mapping/unmapping.
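
For reference, the rough shape of that RX path (a sketch with simplified,
partly hypothetical names, not the exact patch code):

    /* Sync the page for CPU access, copy the frame out, then hand the
     * still-mapped page straight back to the pool.
     */
    skb = netdev_alloc_skb_ip_align(ndev, len);
    if (unlikely(!skb))
            goto drop;

    dma_sync_single_for_cpu(priv->device, buf->addr, len, DMA_FROM_DEVICE);
    skb_copy_to_linear_data(skb, page_address(buf->page), len);
    skb_put(skb, len);

    /* Data payload copied into SKB, page ready for recycle */
    page_pool_recycle_direct(rx_q->page_pool, buf->page);

    napi_gro_receive(&ch->rx_napi, skb);

So the page never leaves the driver; only the linear SKB travels up the
stack.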

> 
> Then it looks like you are doing the correct thing.  I would appreciate
> it if you could add a comment above the call, like:
> 
>    /* Data payload copied into SKB, page ready for recycle */
>    page_pool_recycle_direct(rx_q->page_pool, buf->page);
> 
> 
> > It's kind of like using bounce buffers, and I do see a performance gain
> > in this (I think the reason is that my setup uses swiotlb for DMA mapping).
> > 
> > Anyway, I'm open to some suggestions on how to improve this ...
> 
> I was surprised to see page_pool being used outside the surrounding XDP
> APIs (include/net/xdp.h).  For your use-case, where you "just" use
> page_pool as a driver-local fast recycle-allocator for RX-ring that
> keeps pages DMA mapped, it does make a lot of sense.  It simplifies the
> driver a fair amount:
> 
>   3 files changed, 63 insertions(+), 144 deletions(-)
> 
> Thanks for demonstrating a use-case for page_pool besides XDP, and for
> simplifying a driver with this.

Same here, thanks Jose.
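
For anyone following along, the driver-local usage boils down to something
like this (paraphrasing the patch, field names per include/net/page_pool.h
at the time):

    struct page_pool_params pp_params = {
            .flags = PP_FLAG_DMA_MAP,       /* pool keeps pages DMA-mapped */
            .pool_size = DMA_RX_SIZE,
            .nid = dev_to_node(priv->device),
            .dev = priv->device,
            .dma_dir = DMA_FROM_DEVICE,
    };

    rx_q->page_pool = page_pool_create(&pp_params);
    if (IS_ERR(rx_q->page_pool))
            return PTR_ERR(rx_q->page_pool);

    /* RX refill: take an already-mapped page and give its DMA address
     * to the descriptor; no per-packet map/unmap.
     */
    buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
    buf->addr = page_pool_get_dma_addr(buf->page);

which is exactly the "driver-local fast recycle-allocator" usage you
describe.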

> 
> 
> > > Also remember that the page_pool requires you driver to do the
> > > DMA-sync operation.  I see a dma_sync_single_for_cpu(), but I
> > > didn't see a dma_sync_single_for_device() (well, I noticed one
> > > getting removed). (For some HW Ilias tells me that the
> > > dma_sync_single_for_device can be elided, so maybe this can still
> > > be correct for you).  
> > 
> > My HW just needs descriptors refilled, and those live in a different
> > coherent region, so I don't see any reason for dma_sync_single_for_device() ...
> 
> For your use-case, given you are copying out the data and not writing
> into it, I don't think you need to do the sync for device (before
> giving the device the page again for another RX-ring cycle).
> 
> The way I understand the danger: if the CPU writes to the DMA memory
> region without doing the DMA-sync for-device, the cache/coherency
> system can write that memory back later.  That creates a race with the
> DMA device if it is receiving a packet into the same memory region at
> the time.  Someone correct me if I misunderstood this...

Similar understanding here
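
Spelling the rule out in code (hypothetical snippet, APIs from
include/linux/dma-mapping.h):

    /* CPU wants to look at an RX buffer the device has written: */
    dma_sync_single_for_cpu(dev, buf->addr, len, DMA_FROM_DEVICE);
    memcpy(skb->data, page_address(buf->page), len);    /* read-only */

    /* Read-only access: safe to hand the page straight back to the
     * RX ring.  Had the CPU *written* into the buffer, the mirror
     * call would be needed first, so no dirty cache lines can be
     * written back while the device DMAs into the same page:
     *
     *      dma_sync_single_for_device(dev, buf->addr, len,
     *                                 DMA_FROM_DEVICE);
     */

Jose's path only reads the page before recycling it, so skipping the
for-device sync looks fine to me.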

Cheers
/Ilias

