linux-kernel.vger.kernel.org archive mirror
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Jesper Dangaard Brouer <jbrouer@redhat.com>
Cc: Yunsheng Lin <linyunsheng@huawei.com>,
	brouer@redhat.com,
	Guillaume Tucker <guillaume.tucker@collabora.com>,
	davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linuxarm@openeuler.org,
	hawk@kernel.org, akpm@linux-foundation.org, peterz@infradead.org,
	will@kernel.org, jhubbard@nvidia.com, yuzhao@google.com,
	mcroce@microsoft.com, fenghua.yu@intel.com, feng.tang@intel.com,
	jgg@ziepe.ca, aarcange@redhat.com, guro@fb.com,
	"kernelci@groups.io" <kernelci@groups.io>
Subject: Re: [PATCH net-next v6] page_pool: disable dma mapping support for 32-bit arch with 64-bit DMA
Date: Mon, 15 Nov 2021 20:55:11 +0200	[thread overview]
Message-ID: <YZKtD4+13gQBXEX6@apalos.home>
In-Reply-To: <8c688448-e8a9-5a6b-7b17-ccd294a416d3@redhat.com>


[...] 

> > > > > > > > Some more details can be found here:
> > > > > > > > 
> > > > > > > >    https://linux.kernelci.org/test/case/id/6189968c3ec0a3c06e3358fe/
> > > > > > > > 
> > > > > > > > Here's the same revision on the same platform booting fine with a
> > > > > > > > plain multi_v7_defconfig build:
> > > > > > > > 
> > > > > > > >    https://linux.kernelci.org/test/plan/id/61899d322c0e9fee7e3358ec/
> > > > > > > > 
> > > > > > > > Please let us know if you need any help debugging this issue or
> > > > > > > > if you have a fix to try.
> > > > > > > 
> > > > > > > The patch below removes the dma mapping support in page_pool
> > > > > > > for 32-bit systems with a 64-bit dma address, so it seems there
> > > > > > > is indeed a driver using page_pool with the PP_FLAG_DMA_MAP
> > > > > > > flag set on a 32-bit system with a 64-bit dma address.
> > > > > > > 
> > > > > > > It seems we might need to revert the below patch or implement the
> > > > > > > DMA-mapping tracking support in the driver as mentioned in the below
> > > > > > > commit log.
> > > > > > > 
> > > > > > > Which ethernet driver do you use on your system?
> > > > > > 
> > > > > > Thanks for taking a look and sorry for the slow reply.  Here's a
> > > > > > booting test job with LPAE disabled:
> > > > > > 
> > > > > >      https://linux.kernelci.org/test/plan/id/618dbb81c60c4d94503358f1/
> > > > > >      https://storage.kernelci.org/mainline/master/v5.15-12452-g5833291ab6de/arm/multi_v7_defconfig/gcc-10/lab-collabora/baseline-nfs-rk3288-rock2-square.html#L812
> > > > > > 
> > > > > > [    8.314523] rk_gmac-dwmac ff290000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
> > > > > > 
> > > > > > So the driver is drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
> > > > > 
> > > > > Thanks for the report; this patch seems to cause a problem for
> > > > > 32-bit systems with LPAE enabled.
> > > > > 
> > > > > As LPAE seems like a common feature on 32-bit systems, this patch
> > > > > might need to be reverted.
> > > > > 
> > > > > @Jesper, @Ilias, what do you think?
> > > > 
> > > > 
> > > > So enabling LPAE also enables CONFIG_ARCH_DMA_ADDR_T_64BIT on that board?
> > > > Doing a quick grep, only XEN seems to select that.  I am OK with reverting,
> > > > but I think we need to understand how the dma address ended up being 64-bit.
> > > 
> > > So looking a bit closer, indeed enabling LPAE always enables this.  So
> > > we need to revert the patch.
> > > Yunsheng, will you send that?
> > 
> > Sure.
> 
> Why don't we change that driver[1] to not use page_pool_get_dma_addr() ?
> 
>  [1] drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
> 
> I took a closer look, and it seems the driver has a struct stmmac_rx_buffer
> in which it stores the dma_addr it gets from page_pool_get_dma_addr().
> 
> See func: stmmac_init_rx_buffers
> 
>  static int stmmac_init_rx_buffers(struct stmmac_priv *priv,
> 				struct dma_desc *p,
> 				int i, gfp_t flags, u32 queue)
>  {
> 
> 	if (!buf->page) {
> 		buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
> 		if (!buf->page)
> 			return -ENOMEM;
> 		buf->page_offset = stmmac_rx_offset(priv);
> 	}
> 	[...]
> 
> 	buf->addr = page_pool_get_dma_addr(buf->page) + buf->page_offset;
> 
> 	stmmac_set_desc_addr(priv, p, buf->addr);
> 	[...]
>  }
> 
> I question whether this driver really needs page_pool to store the dma_addr,
> as it just extracts it and stores it outside page_pool.
> 
> @Ilias it looks like you added part of the page_pool support in this driver,
> so I hope you can give a qualified guess on:
> How much work will it be to let the driver do the DMA-map itself?
> (and not depend on the DMA-map feature provided by page_pool)

It shouldn't be that hard.  However, when we removed that, we were hoping we
had no active consumers.  So we'll have to fix this and check for other
32-bit boards with LPAE where page_pool handles the DMA mappings.
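
For reference, letting the driver own the mapping would roughly mean
replacing the page_pool_get_dma_addr() call with an explicit
dma_map_page()/dma_unmap_page() pair.  A rough, untested sketch, not the
actual stmmac code; the buf/rx_q/priv names are assumed from the excerpt
quoted above:

```c
/* Hypothetical sketch: driver-owned DMA mapping, so page_pool no longer
 * needs PP_FLAG_DMA_MAP (and no 64-bit dma_addr stored in the page). */
buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
if (!buf->page)
	return -ENOMEM;

buf->addr = dma_map_page(priv->device, buf->page, 0,
			 PAGE_SIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(priv->device, buf->addr)) {
	page_pool_put_full_page(rx_q->page_pool, buf->page, false);
	buf->page = NULL;
	return -ENOMEM;
}

stmmac_set_desc_addr(priv, p, buf->addr);
/* ...with a matching dma_unmap_page() before the page goes back
 * to the pool on the teardown/recycle path. */
```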
But the point now is that 32-bit CPU + 64-bit DMA is far from a rare
hardware configuration.  Every armv7 and x86 board can get that.  So I was
thinking it's better to revert this and live with the 'weird' handling in the
code.

Cheers
/Ilias


Thread overview: 16+ messages
2021-10-13  9:19 [PATCH net-next v6] page_pool: disable dma mapping support for 32-bit arch with 64-bit DMA Yunsheng Lin
2021-10-13 10:21 ` Jesper Dangaard Brouer
2021-10-15 10:00 ` patchwork-bot+netdevbpf
2021-11-09  9:58 ` Guillaume Tucker
2021-11-09 12:02   ` Yunsheng Lin
2021-11-12  9:21     ` Guillaume Tucker
2021-11-15  3:34       ` Yunsheng Lin
2021-11-15 11:53         ` Ilias Apalodimas
2021-11-15 12:10           ` Ilias Apalodimas
2021-11-15 12:21             ` Yunsheng Lin
2021-11-15 14:39               ` Jesper Dangaard Brouer
2021-11-15 18:55                 ` Ilias Apalodimas [this message]
2021-11-17 11:52                   ` Jesper Dangaard Brouer
2021-11-15 17:59           ` Christoph Hellwig
2021-11-15 18:48             ` Ilias Apalodimas
2021-11-15 18:53               ` Christoph Hellwig
