net-page_pool-refactor-dma_map-into-own-function-page_pool_dma_map.patch added to -mm tree
@ 2021-03-28 22:30 akpm
From: akpm @ 2021-03-28 22:30 UTC (permalink / raw)
To: alexander.duyck, alobakin, brouer, chuck.lever, davem, hch,
ilias.apalodimas, mgorman, mm-commits, vbabka, willy
The patch titled
Subject: net: page_pool: refactor dma_map into own function page_pool_dma_map
has been added to the -mm tree. Its filename is
net-page_pool-refactor-dma_map-into-own-function-page_pool_dma_map.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/net-page_pool-refactor-dma_map-into-own-function-page_pool_dma_map.patch
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/net-page_pool-refactor-dma_map-into-own-function-page_pool_dma_map.patch
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included in linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Jesper Dangaard Brouer <brouer@redhat.com>
Subject: net: page_pool: refactor dma_map into own function page_pool_dma_map
In preparation for the next patch, move the DMA mapping into its own
function; this will make the subsequent changes easier to follow.
[ilias.apalodimas: make page_pool_dma_map return boolean]
Link: https://lkml.kernel.org/r/20210325114228.27719-9-mgorman@techsingularity.net
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Reviewed-by: Alexander Lobakin <alobakin@pm.me>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: David Miller <davem@davemloft.net>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
net/core/page_pool.c | 45 +++++++++++++++++++++++------------------
1 file changed, 26 insertions(+), 19 deletions(-)
--- a/net/core/page_pool.c~net-page_pool-refactor-dma_map-into-own-function-page_pool_dma_map
+++ a/net/core/page_pool.c
@@ -180,14 +180,37 @@ static void page_pool_dma_sync_for_devic
pool->p.dma_dir);
}
+static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
+{
+ dma_addr_t dma;
+
+ /* Setup DMA mapping: use 'struct page' area for storing DMA-addr
+ * since dma_addr_t can be either 32 or 64 bits and does not always fit
+ * into page private data (i.e 32bit cpu with 64bit DMA caps)
+ * This mapping is kept for lifetime of page, until leaving pool.
+ */
+ dma = dma_map_page_attrs(pool->p.dev, page, 0,
+ (PAGE_SIZE << pool->p.order),
+ pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+ if (dma_mapping_error(pool->p.dev, dma))
+ return false;
+
+ page->dma_addr = dma;
+
+ if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+ page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+
+ return true;
+}
+
/* slow path */
noinline
static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
gfp_t _gfp)
{
+ unsigned int pp_flags = pool->p.flags;
struct page *page;
gfp_t gfp = _gfp;
- dma_addr_t dma;
/* We could always set __GFP_COMP, and avoid this branch, as
* prep_new_page() can handle order-0 with __GFP_COMP.
@@ -211,30 +234,14 @@ static struct page *__page_pool_alloc_pa
if (!page)
return NULL;
- if (!(pool->p.flags & PP_FLAG_DMA_MAP))
- goto skip_dma_map;
-
- /* Setup DMA mapping: use 'struct page' area for storing DMA-addr
- * since dma_addr_t can be either 32 or 64 bits and does not always fit
- * into page private data (i.e 32bit cpu with 64bit DMA caps)
- * This mapping is kept for lifetime of page, until leaving pool.
- */
- dma = dma_map_page_attrs(pool->p.dev, page, 0,
- (PAGE_SIZE << pool->p.order),
- pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
- if (dma_mapping_error(pool->p.dev, dma)) {
+ if ((pp_flags & PP_FLAG_DMA_MAP) &&
+ unlikely(!page_pool_dma_map(pool, page))) {
put_page(page);
return NULL;
}
- page->dma_addr = dma;
- if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
- page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
-
-skip_dma_map:
/* Track how many pages are held 'in-flight' */
pool->pages_state_hold_cnt++;
-
trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
/* When page just alloc'ed is should/must have refcnt 1. */
_
Patches currently in -mm which might be from brouer@redhat.com are
mm-page_alloc-optimize-code-layout-for-__alloc_pages_bulk.patch
mm-page_alloc-inline-__rmqueue_pcplist.patch
net-page_pool-refactor-dma_map-into-own-function-page_pool_dma_map.patch
net-page_pool-use-alloc_pages_bulk-in-refill-code-path.patch