netdev.vger.kernel.org archive mirror
* [net-next PATCH 0/2] Fix page_pool API and dma address storage
@ 2019-02-11 16:06 Jesper Dangaard Brouer
  2019-02-11 16:06 ` [net-next PATCH 1/2] mm: add dma_addr_t to struct page Jesper Dangaard Brouer
  2019-02-11 16:06 ` [net-next PATCH 2/2] net: page_pool: don't use page->private to store dma_addr_t Jesper Dangaard Brouer
  0 siblings, 2 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-11 16:06 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Toke Høiland-Jørgensen, Ilias Apalodimas, willy,
	Saeed Mahameed, Jesper Dangaard Brouer, Andrew Morton, mgorman,
	David S. Miller, Tariq Toukan

As pointed out by David Miller in [1], the current page_pool implementation
stores a dma_addr_t in page->private. This won't work on 32-bit platforms
with 64-bit DMA addresses, since page->private is an unsigned long while
dma_addr_t is a u64.

Since no driver is yet using the DMA mapping capabilities of the API, let's
fix this by storing the information in 'struct page' and using that to store
and retrieve DMA addresses from network drivers.

As long as the addresses returned from dma_map_page() are aligned, the first
bit, which is used by the compound pages code, should not be set.
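
As an illustration only (a sketch, not part of the patches), that constraint
could even be checked at map time in __page_pool_alloc_pages_slow():

	dma_addr_t dma = dma_map_page(pool->p.dev, page, 0,
				      PAGE_SIZE << pool->p.order,
				      pool->p.dma_dir);

	/* Sketch: bit 0 doubles as the compound_head tail-page marker
	 * elsewhere in the 'struct page' union, so it must stay clear.
	 */
	if (WARN_ON(dma & 1)) {
		put_page(page);
		return NULL;
	}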

Ilias tested this on the Espressobin board's mvneta driver, for which we
have patches that use the DMA API of page_pool.

[1]: https://lore.kernel.org/netdev/20181207.230655.1261252486319967024.davem@davemloft.net/

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>

---
Question: Which of the maintainers, MM or netdev, will take these changes?

---

Ilias Apalodimas (1):
      net: page_pool: don't use page->private to store dma_addr_t

Jesper Dangaard Brouer (1):
      mm: add dma_addr_t to struct page


 include/linux/mm_types.h |    8 ++++++++
 net/core/page_pool.c     |   13 +++++++++----
 2 files changed, 17 insertions(+), 4 deletions(-)

--

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [net-next PATCH 1/2] mm: add dma_addr_t to struct page
  2019-02-11 16:06 [net-next PATCH 0/2] Fix page_pool API and dma address storage Jesper Dangaard Brouer
@ 2019-02-11 16:06 ` Jesper Dangaard Brouer
  2019-02-11 16:55   ` Matthew Wilcox
  2019-02-11 20:16   ` Andrew Morton
  2019-02-11 16:06 ` [net-next PATCH 2/2] net: page_pool: don't use page->private to store dma_addr_t Jesper Dangaard Brouer
  1 sibling, 2 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-11 16:06 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Toke Høiland-Jørgensen, Ilias Apalodimas, willy,
	Saeed Mahameed, Jesper Dangaard Brouer, Andrew Morton, mgorman,
	David S. Miller, Tariq Toukan

The page_pool API is using page->private to store DMA addresses.
As pointed out by David Miller, we can't use that on 32-bit architectures
with 64-bit DMA.

This patch adds a new dma_addr_t field to struct page to allow storing DMA
addresses.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
 include/linux/mm_types.h |    8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2c471a2c43fa..3060700752cc 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -95,6 +95,14 @@ struct page {
 			 */
 			unsigned long private;
 		};
+		struct {	/* page_pool used by netstack */
+			/**
+			 * @dma_addr: Page_pool need to store DMA-addr, and
+			 * cannot use @private, as DMA-mappings can be 64-bit
+			 * even on 32-bit Architectures.
+			 */
+			dma_addr_t dma_addr; /* Shares area with @lru */
+		};
 		struct {	/* slab, slob and slub */
 			union {
 				struct list_head slab_list;	/* uses lru */
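
With this in place, code holding a page_pool page can read the mapping back
directly.  A minimal sketch (assuming the pool was created with
PP_FLAG_DMA_MAP; 'len' here is a placeholder for the received length):

	dma_addr_t dma = page->dma_addr;

	/* Make the DMA'ed area visible to the CPU before reading it. */
	dma_sync_single_for_cpu(pool->p.dev, dma, len, pool->p.dma_dir);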


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [net-next PATCH 2/2] net: page_pool: don't use page->private to store dma_addr_t
  2019-02-11 16:06 [net-next PATCH 0/2] Fix page_pool API and dma address storage Jesper Dangaard Brouer
  2019-02-11 16:06 ` [net-next PATCH 1/2] mm: add dma_addr_t to struct page Jesper Dangaard Brouer
@ 2019-02-11 16:06 ` Jesper Dangaard Brouer
  2019-02-11 19:31   ` Alexander Duyck
  1 sibling, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-11 16:06 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Toke Høiland-Jørgensen, Ilias Apalodimas, willy,
	Saeed Mahameed, Jesper Dangaard Brouer, Andrew Morton, mgorman,
	David S. Miller, Tariq Toukan

From: Ilias Apalodimas <ilias.apalodimas@linaro.org>

As pointed out by David Miller, the current page_pool implementation
stores a dma_addr_t in page->private.
This won't work on 32-bit platforms with 64-bit DMA addresses, since
page->private is an unsigned long while dma_addr_t is a u64.

The previous patch added a dma_addr_t field to struct page to accommodate
this.  This patch adapts the page_pool related functions to use the newly
added field for storing and retrieving DMA addresses from network drivers.

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 net/core/page_pool.c |   13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 43a932cb609b..897a69a1477e 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -136,7 +136,9 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
 		goto skip_dma_map;
 
-	/* Setup DMA mapping: use page->private for DMA-addr
+	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
+	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
+	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
 	 * This mapping is kept for lifetime of page, until leaving pool.
 	 */
 	dma = dma_map_page(pool->p.dev, page, 0,
@@ -146,7 +148,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 		put_page(page);
 		return NULL;
 	}
-	set_page_private(page, dma); /* page->private = dma; */
+	page->dma_addr = dma;
 
 skip_dma_map:
 	/* When page just alloc'ed is should/must have refcnt 1. */
@@ -175,13 +177,16 @@ EXPORT_SYMBOL(page_pool_alloc_pages);
 static void __page_pool_clean_page(struct page_pool *pool,
 				   struct page *page)
 {
+	dma_addr_t dma;
+
 	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
 		return;
 
+	dma = page->dma_addr;
 	/* DMA unmap */
-	dma_unmap_page(pool->p.dev, page_private(page),
+	dma_unmap_page(pool->p.dev, dma,
 		       PAGE_SIZE << pool->p.order, pool->p.dma_dir);
-	set_page_private(page, 0);
+	page->dma_addr = 0;
 }
 
 /* Return a page to the page allocator, cleaning up our state */


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [net-next PATCH 1/2] mm: add dma_addr_t to struct page
  2019-02-11 16:06 ` [net-next PATCH 1/2] mm: add dma_addr_t to struct page Jesper Dangaard Brouer
@ 2019-02-11 16:55   ` Matthew Wilcox
  2019-02-12 10:06     ` Jesper Dangaard Brouer
  2019-02-11 20:16   ` Andrew Morton
  1 sibling, 1 reply; 9+ messages in thread
From: Matthew Wilcox @ 2019-02-11 16:55 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: netdev, linux-mm, Toke Høiland-Jørgensen,
	Ilias Apalodimas, Saeed Mahameed, Andrew Morton, mgorman,
	David S. Miller, Tariq Toukan

On Mon, Feb 11, 2019 at 05:06:46PM +0100, Jesper Dangaard Brouer wrote:
> The page_pool API is using page->private to store DMA addresses.
> As pointed out by David Miller, we can't use that on 32-bit architectures
> with 64-bit DMA.
> 
> This patch adds a new dma_addr_t field to struct page to allow storing DMA
> addresses.
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>

Reviewed-by: Matthew Wilcox <willy@infradead.org>

> +		struct {	/* page_pool used by netstack */
> +			/**
> +			 * @dma_addr: Page_pool need to store DMA-addr, and

s/need/needs/

> +			 * cannot use @private, as DMA-mappings can be 64-bit

s/DMA-mappings/DMA addresses/

> +			 * even on 32-bit Architectures.

s/A/a/

> +			 */
> +			dma_addr_t dma_addr; /* Shares area with @lru */

It also shares with @slab_list, @next, @compound_head, @pgmap and
@rcu_head.  I think it's pointless to try to document which other fields
something shares space with; the places which do it are a legacy from
before I rearranged struct page last year.  Anyone looking at this should
now be able to see "Oh, this is a union, only use the fields which are
in the union for the type of struct page I have here".

Are the pages allocated from this API ever supposed to be mapped to
userspace?

You also say in the documentation:

 * If no DMA mapping is done, then it can act as shim-layer that
 * fall-through to alloc_page.  As no state is kept on the page, the
 * regular put_page() call is sufficient.

I think this is probably a dangerous precedent to set.  Better to require
exactly one call to page_pool_put_page() (with the understanding that the
refcount may be elevated, so this may not be the final free of the page,
but the page will no longer be usable for its page_pool purpose).
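
Something like the following, as a sketch of the contract I have in mind
(using the current two-argument page_pool_put_page()):

	struct page *page = page_pool_alloc_pages(pool, GFP_ATOMIC);

	if (!page)
		return NULL;

	/* ... hand the page to the device / netstack ... */

	/* Exactly one put per allocated page.  This may not be the final
	 * free if the refcount is elevated, but afterwards the page is
	 * no longer usable for its page_pool purpose.
	 */
	page_pool_put_page(pool, page);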

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [net-next PATCH 2/2] net: page_pool: don't use page->private to store dma_addr_t
  2019-02-11 16:06 ` [net-next PATCH 2/2] net: page_pool: don't use page->private to store dma_addr_t Jesper Dangaard Brouer
@ 2019-02-11 19:31   ` Alexander Duyck
  2019-02-12  8:23     ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 9+ messages in thread
From: Alexander Duyck @ 2019-02-11 19:31 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Netdev, linux-mm, Toke Høiland-Jørgensen,
	Ilias Apalodimas, Matthew Wilcox, Saeed Mahameed, Andrew Morton,
	Mel Gorman, David S. Miller, Tariq Toukan

On Mon, Feb 11, 2019 at 8:07 AM Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
>
> From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
>
> As pointed out by David Miller, the current page_pool implementation
> stores a dma_addr_t in page->private.
> This won't work on 32-bit platforms with 64-bit DMA addresses, since
> page->private is an unsigned long while dma_addr_t is a u64.
>
> The previous patch added a dma_addr_t field to struct page to accommodate
> this.  This patch adapts the page_pool related functions to use the newly
> added field for storing and retrieving DMA addresses from network drivers.
>
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
>  net/core/page_pool.c |   13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 43a932cb609b..897a69a1477e 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -136,7 +136,9 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>         if (!(pool->p.flags & PP_FLAG_DMA_MAP))
>                 goto skip_dma_map;
>
> -       /* Setup DMA mapping: use page->private for DMA-addr
> +       /* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> +        * since dma_addr_t can be either 32 or 64 bits and does not always fit
> +        * into page private data (i.e 32bit cpu with 64bit DMA caps)
>          * This mapping is kept for lifetime of page, until leaving pool.
>          */
>         dma = dma_map_page(pool->p.dev, page, 0,
> @@ -146,7 +148,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>                 put_page(page);
>                 return NULL;
>         }
> -       set_page_private(page, dma); /* page->private = dma; */
> +       page->dma_addr = dma;
>
>  skip_dma_map:
>         /* When page just alloc'ed is should/must have refcnt 1. */
> @@ -175,13 +177,16 @@ EXPORT_SYMBOL(page_pool_alloc_pages);
>  static void __page_pool_clean_page(struct page_pool *pool,
>                                    struct page *page)
>  {
> +       dma_addr_t dma;
> +
>         if (!(pool->p.flags & PP_FLAG_DMA_MAP))
>                 return;
>
> +       dma = page->dma_addr;
>         /* DMA unmap */
> -       dma_unmap_page(pool->p.dev, page_private(page),
> +       dma_unmap_page(pool->p.dev, dma,
>                        PAGE_SIZE << pool->p.order, pool->p.dma_dir);
> -       set_page_private(page, 0);
> +       page->dma_addr = 0;
>  }
>
>  /* Return a page to the page allocator, cleaning up our state */

This comment is unrelated to this patch specifically, but applies more
generally to the page_pool use of dma_unmap_page.

So just looking at this, I am pretty sure the use of plain
dma_unmap_page isn't correct here. You should probably be using
dma_unmap_page_attrs and specifically be passing the attribute
DMA_ATTR_SKIP_CPU_SYNC so that you can tear down the mapping without
invalidating the contents of the page.

This will work for most cases, but if you run into a case where this is
used with SWIOTLB in bounce buffer mode, you would end up potentially
corrupting data on the unmap call.
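
Concretely, the unmap in __page_pool_clean_page() would become something
like this (sketch only):

	/* Sketch: skip the CPU sync when tearing down the mapping, so
	 * e.g. a SWIOTLB bounce-buffer copy-back cannot overwrite the
	 * page contents on unmap.
	 */
	dma_unmap_page_attrs(pool->p.dev, dma,
			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
			     DMA_ATTR_SKIP_CPU_SYNC);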

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [net-next PATCH 1/2] mm: add dma_addr_t to struct page
  2019-02-11 16:06 ` [net-next PATCH 1/2] mm: add dma_addr_t to struct page Jesper Dangaard Brouer
  2019-02-11 16:55   ` Matthew Wilcox
@ 2019-02-11 20:16   ` Andrew Morton
  2019-02-12  8:28     ` Jesper Dangaard Brouer
  1 sibling, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2019-02-11 20:16 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: netdev, linux-mm, Toke Høiland-Jørgensen,
	Ilias Apalodimas, willy, Saeed Mahameed, mgorman,
	David S. Miller, Tariq Toukan

On Mon, 11 Feb 2019 17:06:46 +0100 Jesper Dangaard Brouer <brouer@redhat.com> wrote:

> The page_pool API is using page->private to store DMA addresses.
> As pointed out by David Miller, we can't use that on 32-bit architectures
> with 64-bit DMA.
> 
> This patch adds a new dma_addr_t field to struct page to allow storing DMA
> addresses.
> 
> ..
>
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -95,6 +95,14 @@ struct page {
>  			 */
>  			unsigned long private;
>  		};
> +		struct {	/* page_pool used by netstack */
> +			/**
> +			 * @dma_addr: Page_pool need to store DMA-addr, and
> +			 * cannot use @private, as DMA-mappings can be 64-bit
> +			 * even on 32-bit Architectures.
> +			 */

This comment is a bit awkward.  The discussion about why it doesn't use
->private is uninteresting going forward and is more material for a
changelog.

How about

			/**
			 * @dma_addr: page_pool requires a 64-bit value even on
			 * 32-bit architectures.
			 */

Otherwise,

Acked-by: Andrew Morton <akpm@linux-foundation.org>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [net-next PATCH 2/2] net: page_pool: don't use page->private to store dma_addr_t
  2019-02-11 19:31   ` Alexander Duyck
@ 2019-02-12  8:23     ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-12  8:23 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: Netdev, linux-mm, Toke Høiland-Jørgensen,
	Ilias Apalodimas, Matthew Wilcox, Saeed Mahameed, Andrew Morton,
	Mel Gorman, David S. Miller, Tariq Toukan, brouer

On Mon, 11 Feb 2019 11:31:13 -0800
Alexander Duyck <alexander.duyck@gmail.com> wrote:

> On Mon, Feb 11, 2019 at 8:07 AM Jesper Dangaard Brouer
> <brouer@redhat.com> wrote:
> >
> > From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> >
> > As pointed out by David Miller, the current page_pool implementation
> > stores a dma_addr_t in page->private.
> > This won't work on 32-bit platforms with 64-bit DMA addresses, since
> > page->private is an unsigned long while dma_addr_t is a u64.
> >
> > The previous patch added a dma_addr_t field to struct page to accommodate
> > this.  This patch adapts the page_pool related functions to use the newly
> > added field for storing and retrieving DMA addresses from network drivers.
> >
> > Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> > ---
> >  net/core/page_pool.c |   13 +++++++++----
> >  1 file changed, 9 insertions(+), 4 deletions(-)
> >
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 43a932cb609b..897a69a1477e 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -136,7 +136,9 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> >         if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> >                 goto skip_dma_map;
> >
> > -       /* Setup DMA mapping: use page->private for DMA-addr
> > +       /* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> > +        * since dma_addr_t can be either 32 or 64 bits and does not always fit
> > +        * into page private data (i.e 32bit cpu with 64bit DMA caps)
> >          * This mapping is kept for lifetime of page, until leaving pool.
> >          */
> >         dma = dma_map_page(pool->p.dev, page, 0,
> > @@ -146,7 +148,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> >                 put_page(page);
> >                 return NULL;
> >         }
> > -       set_page_private(page, dma); /* page->private = dma; */
> > +       page->dma_addr = dma;
> >
> >  skip_dma_map:
> >         /* When page just alloc'ed is should/must have refcnt 1. */
> > @@ -175,13 +177,16 @@ EXPORT_SYMBOL(page_pool_alloc_pages);
> >  static void __page_pool_clean_page(struct page_pool *pool,
> >                                    struct page *page)
> >  {
> > +       dma_addr_t dma;
> > +
> >         if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> >                 return;
> >
> > +       dma = page->dma_addr;
> >         /* DMA unmap */
> > -       dma_unmap_page(pool->p.dev, page_private(page),
> > +       dma_unmap_page(pool->p.dev, dma,
> >                        PAGE_SIZE << pool->p.order, pool->p.dma_dir);
> > -       set_page_private(page, 0);
> > +       page->dma_addr = 0;
> >  }
> >
> >  /* Return a page to the page allocator, cleaning up our state */  
> 
> This comment is unrelated to this patch specifically, but applies more
> generally to the page_pool use of dma_unmap_page.
> 
> So just looking at this I am pretty sure the use of just
> dma_unmap_page isn't correct here. You should probably be using
> dma_unmap_page_attrs and specifically be passing the attribute
> DMA_ATTR_SKIP_CPU_SYNC so that you can tear down the mapping without
> invalidating the contents of the page.

It is unrelated to this patch, but YES, you are right.  I was aware of
this, but it slipped my mind.  You were the one who taught me the
principle page_pool is based on: that we keep the DMA mapping and
instead let the driver perform the DMA-sync operations.

Thanks for catching this!  I actually think that the small ARM64 board
we are playing with at the moment (Espressobin) will see a performance
benefit from doing this.


> This is something that will work for most cases but if you run into a
> case where this is used with SWIOTLB in bounce buffer mode you would
> end up potentially corrupting data on the unmap call.

I do have a Macchiatobin board that operates with SWIOTLB bounce
buffers, which it is not supposed to; that is something I'll hopefully
get around to fixing soon.  But we have not implemented use of
page_pool on that board yet.  So, thanks for catching this.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [net-next PATCH 1/2] mm: add dma_addr_t to struct page
  2019-02-11 20:16   ` Andrew Morton
@ 2019-02-12  8:28     ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-12  8:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: netdev, linux-mm, Toke Høiland-Jørgensen,
	Ilias Apalodimas, willy, Saeed Mahameed, mgorman,
	David S. Miller, Tariq Toukan, brouer

On Mon, 11 Feb 2019 12:16:24 -0800
Andrew Morton <akpm@linux-foundation.org> wrote:

> On Mon, 11 Feb 2019 17:06:46 +0100 Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> 
> > The page_pool API is using page->private to store DMA addresses.
> > As pointed out by David Miller, we can't use that on 32-bit architectures
> > with 64-bit DMA.
> > 
> > This patch adds a new dma_addr_t field to struct page to allow storing DMA
> > addresses.
> > 
> > ..
> >
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -95,6 +95,14 @@ struct page {
> >  			 */
> >  			unsigned long private;
> >  		};
> > +		struct {	/* page_pool used by netstack */
> > +			/**
> > +			 * @dma_addr: Page_pool need to store DMA-addr, and
> > +			 * cannot use @private, as DMA-mappings can be 64-bit
> > +			 * even on 32-bit Architectures.
> > +			 */  
> 
> This comment is a bit awkward.  The discussion about why it doesn't use
> ->private is uninteresting going forward and is more material for a  
> changelog.
> 
> How about
> 
> 			/**
> 			 * @dma_addr: page_pool requires a 64-bit value even on
> 			 * 32-bit architectures.
> 			 */

Much better, I'll use that!

> Otherwise,
> 
> Acked-by: Andrew Morton <akpm@linux-foundation.org>

Thanks!

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [net-next PATCH 1/2] mm: add dma_addr_t to struct page
  2019-02-11 16:55   ` Matthew Wilcox
@ 2019-02-12 10:06     ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-12 10:06 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: netdev, linux-mm, Toke Høiland-Jørgensen,
	Ilias Apalodimas, Saeed Mahameed, Andrew Morton, mgorman,
	David S. Miller, Tariq Toukan, brouer, Willem de Bruijn

On Mon, 11 Feb 2019 08:55:51 -0800
Matthew Wilcox <willy@infradead.org> wrote:

> On Mon, Feb 11, 2019 at 05:06:46PM +0100, Jesper Dangaard Brouer wrote:
> > The page_pool API is using page->private to store DMA addresses.
> > As pointed out by David Miller, we can't use that on 32-bit architectures
> > with 64-bit DMA.
> > 
> > This patch adds a new dma_addr_t field to struct page to allow storing DMA
> > addresses.
> > 
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> > Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>  
> 
> Reviewed-by: Matthew Wilcox <willy@infradead.org>
> 
> > +		struct {	/* page_pool used by netstack */
> > +			/**
> > +			 * @dma_addr: Page_pool need to store DMA-addr, and  
> 
> s/need/needs/
> 
> > +			 * cannot use @private, as DMA-mappings can be 64-bit  
> 
> s/DMA-mappings/DMA addresses/
> 
> > +			 * even on 32-bit Architectures.  
> 
> s/A/a/

Yes, that comment needs improvement. I think I'll use AKPM's suggestion.


> > +			 */
> > +			dma_addr_t dma_addr; /* Shares area with @lru */  
> 
> It also shares with @slab_list, @next, @compound_head, @pgmap and
> @rcu_head.  I think it's pointless to try to document which other fields
> something shares space with; the places which do it are a legacy from
> before I rearranged struct page last year.  Anyone looking at this should
> now be able to see "Oh, this is a union, only use the fields which are
> in the union for the type of struct page I have here".

I agree, I'll strip that comment.

 
> Are the pages allocated from this API ever supposed to be mapped to
> userspace?

I would like to know which fields in struct page we cannot touch if we
want to keep this a possibility.

That said, I hope we don't need to do this. But as I integrate this
further into the netstack code, we might have to support this, or at
least release the page_pool "state" (currently only the DMA-addr)
before the skb_zcopy code path.  The first iteration will not do
zero-copy stuff, and later I'll coordinate with Willem on how to add
this, if needed.

My general opinion is that if an end-user wants to have pages mapped to
userspace, then page_pool (MEM_TYPE_PAGE_POOL) is not the right choice;
MEM_TYPE_ZERO_COPY (see enum xdp_mem_type) should be used instead.  We
are generally working towards allowing NIC drivers to have a different
memory type per RX-ring.
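
For reference, the memory types in question (as defined in
include/net/xdp.h at the time of writing):

	enum xdp_mem_type {
		MEM_TYPE_PAGE_SHARED = 0, /* Split-page refcnt based model */
		MEM_TYPE_PAGE_ORDER0,     /* Orig XDP full page model */
		MEM_TYPE_PAGE_POOL,
		MEM_TYPE_ZERO_COPY,
		MEM_TYPE_MAX,
	};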


> You also say in the documentation:
> 
>  * If no DMA mapping is done, then it can act as shim-layer that
>  * fall-through to alloc_page.  As no state is kept on the page, the
>  * regular put_page() call is sufficient.
> 
> I think this is probably a dangerous precedent to set.  Better to require
> exactly one call to page_pool_put_page() (with the understanding that the
> refcount may be elevated, so this may not be the final free of the page,
> but the page will no longer be usable for its page_pool purpose).

Yes, this is actually how it is implemented today, and the comment
should be improved.  Today, when the refcount is elevated,
__page_pool_put_page() does call __page_pool_clean_page() to release
the page's page_pool state, and the page is in principle no longer
"usable" for page_pool purposes.
BUT I have considered removing this, as it might not fit how we want to
use the API. In our current RFC we found a need for (and introduced) a
page_pool_unmap_page() call (which calls __page_pool_clean_page()),
for cases where the code path doesn't have a call-back to
page_pool_put_page() but instead ends up calling put_page(), as
sketched below.
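
A minimal sketch of that helper (from our RFC; naming and placement may
still change):

	/* Sketch: release page_pool state (the DMA mapping) for code
	 * paths that end in a plain put_page() instead of a
	 * page_pool_put_page() callback.
	 */
	void page_pool_unmap_page(struct page_pool *pool, struct page *page)
	{
		__page_pool_clean_page(pool, page);
	}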

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2019-02-12 10:06 UTC | newest]

Thread overview: 9+ messages
-- links below jump to the message on this page --
2019-02-11 16:06 [net-next PATCH 0/2] Fix page_pool API and dma address storage Jesper Dangaard Brouer
2019-02-11 16:06 ` [net-next PATCH 1/2] mm: add dma_addr_t to struct page Jesper Dangaard Brouer
2019-02-11 16:55   ` Matthew Wilcox
2019-02-12 10:06     ` Jesper Dangaard Brouer
2019-02-11 20:16   ` Andrew Morton
2019-02-12  8:28     ` Jesper Dangaard Brouer
2019-02-11 16:06 ` [net-next PATCH 2/2] net: page_pool: don't use page->private to store dma_addr_t Jesper Dangaard Brouer
2019-02-11 19:31   ` Alexander Duyck
2019-02-12  8:23     ` Jesper Dangaard Brouer
