* [net-next PATCH V2 1/3] mm: add dma_addr_t to struct page
2019-02-12 14:48 [net-next PATCH V2 0/3] Fix page_pool API and dma address storage Jesper Dangaard Brouer
@ 2019-02-12 14:49 ` Jesper Dangaard Brouer
2019-02-12 18:05 ` Florian Fainelli
2019-02-12 14:49 ` [net-next PATCH V2 2/3] net: page_pool: don't use page->private to store dma_addr_t Jesper Dangaard Brouer
2019-02-12 14:49 ` [net-next PATCH V2 3/3] page_pool: use DMA_ATTR_SKIP_CPU_SYNC for DMA mappings Jesper Dangaard Brouer
2 siblings, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-12 14:49 UTC (permalink / raw)
To: netdev, linux-mm
Cc: Toke Høiland-Jørgensen, Ilias Apalodimas, willy,
Saeed Mahameed, Alexander Duyck, Jesper Dangaard Brouer,
Andrew Morton, mgorman, David S. Miller, Tariq Toukan
The page_pool API is using page->private to store DMA addresses.
As pointed out by David Miller, we can't use that on 32-bit architectures
with 64-bit DMA.
This patch adds a new dma_addr_t field to struct page to allow storing DMA addresses.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
---
include/linux/mm_types.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2c471a2c43fa..581737bd0878 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -95,6 +95,13 @@ struct page {
*/
unsigned long private;
};
+ struct { /* page_pool used by netstack */
+ /**
+ * @dma_addr: page_pool requires a 64-bit value even on
+ * 32-bit architectures.
+ */
+ dma_addr_t dma_addr;
+ };
struct { /* slab, slob and slub */
union {
struct list_head slab_list; /* uses lru */
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [net-next PATCH V2 1/3] mm: add dma_addr_t to struct page
2019-02-12 14:49 ` [net-next PATCH V2 1/3] mm: add dma_addr_t to struct page Jesper Dangaard Brouer
@ 2019-02-12 18:05 ` Florian Fainelli
2019-02-12 18:19 ` Jesper Dangaard Brouer
0 siblings, 1 reply; 9+ messages in thread
From: Florian Fainelli @ 2019-02-12 18:05 UTC (permalink / raw)
To: Jesper Dangaard Brouer, netdev, linux-mm
Cc: Toke Høiland-Jørgensen, Ilias Apalodimas, willy,
Saeed Mahameed, Alexander Duyck, Andrew Morton, mgorman,
David S. Miller, Tariq Toukan
On 2/12/19 6:49 AM, Jesper Dangaard Brouer wrote:
> The page_pool API is using page->private to store DMA addresses.
> As pointed out by David Miller we can't use that on 32-bit architectures
> with 64-bit DMA
>
> This patch adds a new dma_addr_t struct to allow storing DMA addresses
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> Acked-by: Andrew Morton <akpm@linux-foundation.org>
> ---
> include/linux/mm_types.h | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 2c471a2c43fa..581737bd0878 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -95,6 +95,13 @@ struct page {
> */
> unsigned long private;
> };
> + struct { /* page_pool used by netstack */
> + /**
> + * @dma_addr: page_pool requires a 64-bit value even on
> + * 32-bit architectures.
> + */
Nit: might require? dma_addr_t, as you mention in the commit, may have a
different size based on CONFIG_ARCH_DMA_ADDR_T_64BIT.
--
Florian
* Re: [net-next PATCH V2 1/3] mm: add dma_addr_t to struct page
2019-02-12 18:05 ` Florian Fainelli
@ 2019-02-12 18:19 ` Jesper Dangaard Brouer
2019-02-12 18:23 ` Florian Fainelli
0 siblings, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-12 18:19 UTC (permalink / raw)
To: Florian Fainelli
Cc: netdev, linux-mm, Toke Høiland-Jørgensen,
Ilias Apalodimas, willy, Saeed Mahameed, Alexander Duyck,
Andrew Morton, mgorman, David S. Miller, Tariq Toukan, brouer
On Tue, 12 Feb 2019 10:05:39 -0800
Florian Fainelli <f.fainelli@gmail.com> wrote:
> On 2/12/19 6:49 AM, Jesper Dangaard Brouer wrote:
> > The page_pool API is using page->private to store DMA addresses.
> > As pointed out by David Miller we can't use that on 32-bit architectures
> > with 64-bit DMA
> >
> > This patch adds a new dma_addr_t struct to allow storing DMA addresses
> >
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> > Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> > Acked-by: Andrew Morton <akpm@linux-foundation.org>
> > ---
> > include/linux/mm_types.h | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index 2c471a2c43fa..581737bd0878 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -95,6 +95,13 @@ struct page {
> > */
> > unsigned long private;
> > };
> > + struct { /* page_pool used by netstack */
> > + /**
> > + * @dma_addr: page_pool requires a 64-bit value even on
> > + * 32-bit architectures.
> > + */
>
> Nit: might require? dma_addr_t, as you mention in the commit may have a
> different size based on CONFIG_ARCH_DMA_ADDR_T_64BIT.
So you want me to change the comment to be:
/**
* @dma_addr: might require a 64-bit value even on
* 32-bit architectures.
*/
Did I understand that correctly?
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
* Re: [net-next PATCH V2 1/3] mm: add dma_addr_t to struct page
2019-02-12 18:19 ` Jesper Dangaard Brouer
@ 2019-02-12 18:23 ` Florian Fainelli
0 siblings, 0 replies; 9+ messages in thread
From: Florian Fainelli @ 2019-02-12 18:23 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: netdev, linux-mm, Toke Høiland-Jørgensen,
Ilias Apalodimas, willy, Saeed Mahameed, Alexander Duyck,
Andrew Morton, mgorman, David S. Miller, Tariq Toukan
On 2/12/19 10:19 AM, Jesper Dangaard Brouer wrote:
> On Tue, 12 Feb 2019 10:05:39 -0800
> Florian Fainelli <f.fainelli@gmail.com> wrote:
>
>> On 2/12/19 6:49 AM, Jesper Dangaard Brouer wrote:
>>> The page_pool API is using page->private to store DMA addresses.
>>> As pointed out by David Miller we can't use that on 32-bit architectures
>>> with 64-bit DMA
>>>
>>> This patch adds a new dma_addr_t struct to allow storing DMA addresses
>>>
>>> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
>>> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
>>> Acked-by: Andrew Morton <akpm@linux-foundation.org>
>>> ---
>>> include/linux/mm_types.h | 7 +++++++
>>> 1 file changed, 7 insertions(+)
>>>
>>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>>> index 2c471a2c43fa..581737bd0878 100644
>>> --- a/include/linux/mm_types.h
>>> +++ b/include/linux/mm_types.h
>>> @@ -95,6 +95,13 @@ struct page {
>>> */
>>> unsigned long private;
>>> };
>>> + struct { /* page_pool used by netstack */
>>> + /**
>>> + * @dma_addr: page_pool requires a 64-bit value even on
>>> + * 32-bit architectures.
>>> + */
>>
>> Nit: might require? dma_addr_t, as you mention in the commit may have a
>> different size based on CONFIG_ARCH_DMA_ADDR_T_64BIT.
>
> So you want me to change the comment to be:
>
> /**
> * @dma_addr: might require a 64-bit value even on
> * 32-bit architectures.
> */
>
> Correctly understood?
Correct, that is what I would change. The commit message is correct, but
the comment makes it sound like dma_addr_t is guaranteed to be 64-bit,
while it is actually platform-dependent. Does that make it clearer?
--
Florian
* [net-next PATCH V2 2/3] net: page_pool: don't use page->private to store dma_addr_t
2019-02-12 14:48 [net-next PATCH V2 0/3] Fix page_pool API and dma address storage Jesper Dangaard Brouer
2019-02-12 14:49 ` [net-next PATCH V2 1/3] mm: add dma_addr_t to struct page Jesper Dangaard Brouer
@ 2019-02-12 14:49 ` Jesper Dangaard Brouer
2019-02-12 14:49 ` [net-next PATCH V2 3/3] page_pool: use DMA_ATTR_SKIP_CPU_SYNC for DMA mappings Jesper Dangaard Brouer
2 siblings, 0 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-12 14:49 UTC (permalink / raw)
To: netdev, linux-mm
Cc: Toke Høiland-Jørgensen, Ilias Apalodimas, willy,
Saeed Mahameed, Alexander Duyck, Jesper Dangaard Brouer,
Andrew Morton, mgorman, David S. Miller, Tariq Toukan
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
As pointed out by David Miller, the current page_pool implementation
stores dma_addr_t in page->private.
This won't work on 32-bit platforms with 64-bit DMA addresses, since
page->private is an unsigned long while dma_addr_t is a u64 there.
The previous patch added a dma_addr_t field to struct page to accommodate this.
This patch adapts the page_pool related functions to use the newly added
field for storing and retrieving DMA addresses from network drivers.
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
net/core/page_pool.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 43a932cb609b..897a69a1477e 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -136,7 +136,9 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
if (!(pool->p.flags & PP_FLAG_DMA_MAP))
goto skip_dma_map;
- /* Setup DMA mapping: use page->private for DMA-addr
+ /* Setup DMA mapping: use 'struct page' area for storing DMA-addr
+ * since dma_addr_t can be either 32 or 64 bits and does not always fit
+ * into page private data (i.e 32bit cpu with 64bit DMA caps)
* This mapping is kept for lifetime of page, until leaving pool.
*/
dma = dma_map_page(pool->p.dev, page, 0,
@@ -146,7 +148,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
put_page(page);
return NULL;
}
- set_page_private(page, dma); /* page->private = dma; */
+ page->dma_addr = dma;
skip_dma_map:
/* When page just alloc'ed is should/must have refcnt 1. */
@@ -175,13 +177,16 @@ EXPORT_SYMBOL(page_pool_alloc_pages);
static void __page_pool_clean_page(struct page_pool *pool,
struct page *page)
{
+ dma_addr_t dma;
+
if (!(pool->p.flags & PP_FLAG_DMA_MAP))
return;
+ dma = page->dma_addr;
/* DMA unmap */
- dma_unmap_page(pool->p.dev, page_private(page),
+ dma_unmap_page(pool->p.dev, dma,
PAGE_SIZE << pool->p.order, pool->p.dma_dir);
- set_page_private(page, 0);
+ page->dma_addr = 0;
}
/* Return a page to the page allocator, cleaning up our state */
* [net-next PATCH V2 3/3] page_pool: use DMA_ATTR_SKIP_CPU_SYNC for DMA mappings
2019-02-12 14:48 [net-next PATCH V2 0/3] Fix page_pool API and dma address storage Jesper Dangaard Brouer
2019-02-12 14:49 ` [net-next PATCH V2 1/3] mm: add dma_addr_t to struct page Jesper Dangaard Brouer
2019-02-12 14:49 ` [net-next PATCH V2 2/3] net: page_pool: don't use page->private to store dma_addr_t Jesper Dangaard Brouer
@ 2019-02-12 14:49 ` Jesper Dangaard Brouer
2019-02-12 17:12 ` kbuild test robot
2 siblings, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-12 14:49 UTC (permalink / raw)
To: netdev, linux-mm
Cc: Toke Høiland-Jørgensen, Ilias Apalodimas, willy,
Saeed Mahameed, Alexander Duyck, Jesper Dangaard Brouer,
Andrew Morton, mgorman, David S. Miller, Tariq Toukan
As pointed out by Alexander Duyck, the DMA mapping done in page_pool needs
to use the DMA attribute DMA_ATTR_SKIP_CPU_SYNC, as the principle behind
page_pool keeping the pages mapped is that the driver takes over the
DMA-sync steps.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
net/core/page_pool.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 897a69a1477e..7e624c2cd709 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -141,9 +141,9 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
* into page private data (i.e 32bit cpu with 64bit DMA caps)
* This mapping is kept for lifetime of page, until leaving pool.
*/
- dma = dma_map_page(pool->p.dev, page, 0,
- (PAGE_SIZE << pool->p.order),
- pool->p.dma_dir);
+ dma = dma_map_page_attrs(pool->p.dev, page, 0,
+ (PAGE_SIZE << pool->p.order),
+ pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
if (dma_mapping_error(pool->p.dev, dma)) {
put_page(page);
return NULL;
@@ -184,8 +184,9 @@ static void __page_pool_clean_page(struct page_pool *pool,
dma = page->dma_addr;
/* DMA unmap */
- dma_unmap_page(pool->p.dev, dma,
- PAGE_SIZE << pool->p.order, pool->p.dma_dir);
+ dma_unmap_page_attr(pool->p.dev, dma,
+ PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ DMA_ATTR_SKIP_CPU_SYNC);
page->dma_addr = 0;
}
* Re: [net-next PATCH V2 3/3] page_pool: use DMA_ATTR_SKIP_CPU_SYNC for DMA mappings
2019-02-12 14:49 ` [net-next PATCH V2 3/3] page_pool: use DMA_ATTR_SKIP_CPU_SYNC for DMA mappings Jesper Dangaard Brouer
@ 2019-02-12 17:12 ` kbuild test robot
2019-02-12 17:58 ` Jesper Dangaard Brouer
0 siblings, 1 reply; 9+ messages in thread
From: kbuild test robot @ 2019-02-12 17:12 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: kbuild-all, netdev, linux-mm, Toke Høiland-Jørgensen,
Ilias Apalodimas, willy, Saeed Mahameed, Alexander Duyck,
Jesper Dangaard Brouer, Andrew Morton, mgorman, David S. Miller,
Tariq Toukan
Hi Jesper,
I love your patch! Yet something to improve:
[auto build test ERROR on net-next/master]
url: https://github.com/0day-ci/linux/commits/Jesper-Dangaard-Brouer/mm-add-dma_addr_t-to-struct-page/20190213-002150
config: ia64-allyesconfig (attached as .config)
compiler: ia64-linux-gcc (GCC) 8.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
GCC_VERSION=8.2.0 make.cross ARCH=ia64
All errors (new ones prefixed by >>):
net/core/page_pool.c: In function '__page_pool_clean_page':
>> net/core/page_pool.c:187:2: error: implicit declaration of function 'dma_unmap_page_attr'; did you mean 'dma_unmap_page_attrs'? [-Werror=implicit-function-declaration]
dma_unmap_page_attr(pool->p.dev, dma,
^~~~~~~~~~~~~~~~~~~
dma_unmap_page_attrs
cc1: some warnings being treated as errors
vim +187 net/core/page_pool.c
175
176 /* Cleanup page_pool state from page */
177 static void __page_pool_clean_page(struct page_pool *pool,
178 struct page *page)
179 {
180 dma_addr_t dma;
181
182 if (!(pool->p.flags & PP_FLAG_DMA_MAP))
183 return;
184
185 dma = page->dma_addr;
186 /* DMA unmap */
> 187 dma_unmap_page_attr(pool->p.dev, dma,
188 PAGE_SIZE << pool->p.order, pool->p.dma_dir,
189 DMA_ATTR_SKIP_CPU_SYNC);
190 page->dma_addr = 0;
191 }
192
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
* Re: [net-next PATCH V2 3/3] page_pool: use DMA_ATTR_SKIP_CPU_SYNC for DMA mappings
2019-02-12 17:12 ` kbuild test robot
@ 2019-02-12 17:58 ` Jesper Dangaard Brouer
0 siblings, 0 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-02-12 17:58 UTC (permalink / raw)
To: kbuild test robot
Cc: kbuild-all, netdev, linux-mm, Toke Høiland-Jørgensen,
Ilias Apalodimas, willy, Saeed Mahameed, Alexander Duyck,
Andrew Morton, mgorman, David S. Miller, Tariq Toukan, brouer
On Wed, 13 Feb 2019 01:12:59 +0800
kbuild test robot <lkp@intel.com> wrote:
> net/core/page_pool.c: In function '__page_pool_clean_page':
> >> net/core/page_pool.c:187:2: error: implicit declaration of function 'dma_unmap_page_attr'; did you mean 'dma_unmap_page_attrs'? [-Werror=implicit-function-declaration]
> dma_unmap_page_attr(pool->p.dev, dma,
> ^~~~~~~~~~~~~~~~~~~
> dma_unmap_page_attrs
> cc1: some warnings being treated as errors
Oops, in my compile test I didn't have CONFIG_PAGE_POOL defined.
Will respin a V3.
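The build error comes from the misspelled function name in patch 3/3; going by the robot's suggestion, the V3 respin would presumably just add the missing 's' — a sketch of the expected fixup, not the actual V3 hunk:

```diff
-	dma_unmap_page_attr(pool->p.dev, dma,
+	dma_unmap_page_attrs(pool->p.dev, dma,
 			    PAGE_SIZE << pool->p.order, pool->p.dma_dir,
 			    DMA_ATTR_SKIP_CPU_SYNC);
```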
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer