From: Jesper Dangaard Brouer <brouer@redhat.com>
To: David Laight <David.Laight@ACULAB.COM>
Cc: 'Matthew Wilcox' <willy@infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>,
	"linux-mips@vger.kernel.org" <linux-mips@vger.kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Matteo Croce <mcroce@linux.microsoft.com>,
	Grygorii Strashko <grygorii.strashko@ti.com>,
	Arnd Bergmann <arnd@kernel.org>, "Christoph Hellwig" <hch@lst.de>,
	brouer@redhat.com
Subject: Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems
Date: Thu, 15 Apr 2021 20:08:32 +0200
Message-ID: <20210415200832.32796445@carbon>
In-Reply-To: <a50c3156fe8943ef964db4345344862f@AcuMS.aculab.com>

On Wed, 14 Apr 2021 21:56:39 +0000
David Laight <David.Laight@ACULAB.COM> wrote:

> From: Matthew Wilcox
> > Sent: 14 April 2021 22:36
> > 
> > On Wed, Apr 14, 2021 at 09:13:22PM +0200, Jesper Dangaard Brouer wrote:  
> > > (If others want to reproduce).  First I could not reproduce on ARM32.
> > > Then I found out that enabling CONFIG_XEN on ARCH=arm was needed to
> > > cause the issue by enabling CONFIG_ARCH_DMA_ADDR_T_64BIT.  
> > 
> > hmmm ... you should be able to provoke it by enabling ARM_LPAE,
> > which selects PHYS_ADDR_T_64BIT, and
> > 
> > config ARCH_DMA_ADDR_T_64BIT
> >         def_bool 64BIT || PHYS_ADDR_T_64BIT
> >   
> > >  struct page {
> > >         long unsigned int          flags;                /*     0     4 */
> > >
> > >         /* XXX 4 bytes hole, try to pack */
> > >
> > >         union {
> > >                 struct {
> > >                         struct list_head lru;            /*     8     8 */
> > >                         struct address_space * mapping;  /*    16     4 */
> > >                         long unsigned int index;         /*    20     4 */
> > >                         long unsigned int private;       /*    24     4 */
> > >                 };                                       /*     8    20 */
> > >                 struct {
> > >                         dma_addr_t dma_addr  
> 
> Adding __packed here will remove the 4 byte hole before the union
> and the compiler seems clever enough to know that anything following
> a 'long' must also be 'long' aligned.
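
As an aside, the struct layout dump quoted above is pahole output.
Assuming a 32-bit vmlinux built with debug info (CONFIG_DEBUG_INFO=y),
something like this should reproduce it:

  pahole -C page vmlinux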

I played with __packed in the patch below, and I can confirm it seems to work.
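
For illustration (a standalone mock-up, not kernel code; the field names
only mirror struct page), the layout effect is visible when compiling the
following for 32-bit ARM, e.g. with arm-linux-gnueabihf-gcc:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;	/* as with CONFIG_ARCH_DMA_ADDR_T_64BIT */

/* Standalone mock of the relevant corner of struct page. */
struct page_mock {
	unsigned long flags;
	union {
		unsigned long compound_head;
		dma_addr_t dma_addr __attribute__((packed));
	};
};

int main(void)
{
	/* On 32-bit ARM the packed attribute drops dma_addr's 8-byte
	 * alignment requirement, the union becomes 4-byte aligned, and
	 * the 4-byte hole after flags disappears. */
	printf("sizeof=%zu offsetof(dma_addr)=%zu\n",
	       sizeof(struct page_mock),
	       offsetof(struct page_mock, dma_addr));
	return 0;
}

With the packed attribute this prints sizeof=12 offsetof(dma_addr)=4;
dropping the attribute restores the 8-byte union alignment and the
4-byte hole (offset 8).  (Plain x86 -m32 will not show the hole, as the
i386 ABI only aligns 64-bit integers to 4 bytes inside structs.)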

> So you don't get anything horrid like byte accesses.
> On 64bit dma_addr will remain 64bit aligned.
> On arm32 dma_addr will be 32bit aligned - but forcing two 32bit access
> won't make any difference.

See the patch below, where I swap the 32-bit halves of the DMA address
to satisfy page->compound_head having bit zero cleared. (It is the
simplest fix I could come up with.)
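
Note the patch calls the same helper both when storing the address and
when reading it back; this works because swapping the two 32-bit halves
is its own inverse.  A quick standalone check of that property (my
illustration only, plain userspace C, assuming a 64-bit dma_addr_t):

#include <assert.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* Mirrors the 32-bit branch of page_pool_dma_addr_read() in the patch
 * below: exchange the two 32-bit halves of the DMA address. */
static dma_addr_t swap_halves(dma_addr_t dma_addr)
{
	return (dma_addr << 32) | (dma_addr >> 32);
}

int main(void)
{
	dma_addr_t dma = 0x00000001fffff000ULL;	/* page aligned, above 4GB */

	/* Involution: swapping on store and again on load yields the
	 * original address. */
	assert(swap_halves(swap_halves(dma)) == dma);
	return 0;
}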


[PATCH] page_pool: handle 32-bit archs with 64-bit dma_addr_t

From: Jesper Dangaard Brouer <brouer@redhat.com>

Workaround for storing a 64-bit DMA address in struct page on 32-bit
machines.  page->dma_addr shares its area with page->compound_head,
which uses bit zero to mark compound pages.  This is okay, as DMA
addresses are aligned pointers which have bit zero cleared.

In the 32-bit case, page->compound_head is 32-bit.  Thus, when
dma_addr_t is 64-bit, it will be located in the top 32 bits.  Solve
this by swapping the 32-bit halves of dma_addr.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 include/linux/mm_types.h |    2 +-
 include/linux/types.h    |    1 +
 include/net/page_pool.h  |   21 ++++++++++++++++++++-
 net/core/page_pool.c     |    8 +++++---
 4 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6613b26a8894..27406e3b1e1b 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -100,7 +100,7 @@ struct page {
 			 * @dma_addr: might require a 64-bit value even on
 			 * 32-bit architectures.
 			 */
-			dma_addr_t dma_addr;
+			dma_addr_t dma_addr __packed;
 		};
 		struct {	/* slab, slob and slub */
 			union {
diff --git a/include/linux/types.h b/include/linux/types.h
index ac825ad90e44..65fd5d630016 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -141,6 +141,7 @@ typedef u64 blkcnt_t;
  */
 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
 typedef u64 dma_addr_t;
+//typedef u64 __attribute__((aligned(sizeof(void *)))) dma_addr_t;
 #else
 typedef u32 dma_addr_t;
 #endif
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b5b195305346..c2329088665c 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -196,9 +196,28 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 	page_pool_put_full_page(pool, page, true);
 }
 
+static inline
+dma_addr_t page_pool_dma_addr_read(dma_addr_t dma_addr)
+{
+	/* Workaround for storing a 64-bit DMA address in struct page on
+	 * 32-bit machines.  page->dma_addr shares its area with
+	 * page->compound_head, which uses bit zero to mark compound pages.
+	 * This is okay, as DMA addresses are aligned and have bit zero cleared.
+	 *
+	 * In the 32-bit case, page->compound_head is 32-bit.  Thus, when
+	 * dma_addr_t is 64-bit, it will be located in the top 32 bits.
+	 * Solve this by swapping the 32-bit halves of dma_addr.
+	 */
+#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+	if (sizeof(unsigned long) == 4) /* 32-bit system */
+		dma_addr = (dma_addr << 32) | (dma_addr >> 32);
+#endif
+	return dma_addr;
+}
+
 static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
 {
-	return page->dma_addr;
+	return page_pool_dma_addr_read(page->dma_addr);
 }
 
 static inline bool is_page_pool_compiled_in(void)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ad8b0707af04..813598ea23f6 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -174,8 +174,10 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
 					  struct page *page,
 					  unsigned int dma_sync_size)
 {
+	dma_addr_t dma = page_pool_dma_addr_read(page->dma_addr);
+
 	dma_sync_size = min(dma_sync_size, pool->p.max_len);
-	dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
+	dma_sync_single_range_for_device(pool->p.dev, dma,
 					 pool->p.offset, dma_sync_size,
 					 pool->p.dma_dir);
 }
@@ -226,7 +228,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 		put_page(page);
 		return NULL;
 	}
-	page->dma_addr = dma;
+	page->dma_addr = page_pool_dma_addr_read(dma);
 
 	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
 		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
@@ -294,7 +296,7 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 		 */
 		goto skip_dma_unmap;
 
-	dma = page->dma_addr;
+	dma = page_pool_dma_addr_read(page->dma_addr);
 
 	/* When page is unmapped, it cannot be returned to our pool */
 	dma_unmap_page_attrs(pool->p.dev, dma,


--
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


