From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Mel Gorman <mgorman@techsingularity.net>, linux-mm@kvack.org
Cc: Jesper Dangaard Brouer <brouer@redhat.com>,
chuck.lever@oracle.com, netdev@vger.kernel.org,
linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH RFC net-next 1/3] net: page_pool: refactor dma_map into own function page_pool_dma_map
Date: Wed, 24 Feb 2021 19:56:41 +0100
Message-ID: <161419300107.2718959.18302883670835746249.stgit@firesoul>
In-Reply-To: <161419296941.2718959.12575257358107256094.stgit@firesoul>
In preparation for the next patch, move the DMA mapping into its own
function, as this will make it easier to follow the changes.
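After the move, the slow-path call site reduces to roughly the
following (condensed from the diff below):

	if (pool->p.flags & PP_FLAG_DMA_MAP) {
		page = page_pool_dma_map(pool, page);
		if (!page)
			return NULL;
	}

page_pool_dma_map() returns the page with page->dma_addr set on
success; on dma_mapping_error() it drops the page reference via
put_page() and returns NULL, so the caller needs no extra cleanup.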
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
net/core/page_pool.c | 49 +++++++++++++++++++++++++++++--------------------
1 file changed, 29 insertions(+), 20 deletions(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ad8b0707af04..50d52aa6fbeb 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -180,6 +180,31 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
 					 pool->p.dma_dir);
 }
 
+static struct page *page_pool_dma_map(struct page_pool *pool,
+				      struct page *page)
+{
+	dma_addr_t dma;
+
+	/* Setup DMA mapping: use 'struct page' area for storing the DMA addr,
+	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
+	 * into page private data (i.e. a 32-bit CPU with 64-bit DMA caps).
+	 * The mapping is kept for the page's lifetime, until leaving the pool.
+	 */
+	dma = dma_map_page_attrs(pool->p.dev, page, 0,
+				 (PAGE_SIZE << pool->p.order),
+				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+	if (dma_mapping_error(pool->p.dev, dma)) {
+		put_page(page);
+		return NULL;
+	}
+	page->dma_addr = dma;
+
+	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+
+	return page;
+}
+
 /* slow path */
 noinline
 static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
@@ -187,7 +212,6 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 {
 	struct page *page;
 	gfp_t gfp = _gfp;
-	dma_addr_t dma;
 
 	/* We could always set __GFP_COMP, and avoid this branch, as
 	 * prep_new_page() can handle order-0 with __GFP_COMP.
@@ -211,27 +235,12 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	if (!page)
 		return NULL;
 
-	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
-		goto skip_dma_map;
-
-	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
-	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
-	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
-	 * This mapping is kept for lifetime of page, until leaving pool.
-	 */
-	dma = dma_map_page_attrs(pool->p.dev, page, 0,
-				 (PAGE_SIZE << pool->p.order),
-				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
-	if (dma_mapping_error(pool->p.dev, dma)) {
-		put_page(page);
-		return NULL;
+	if (pool->p.flags & PP_FLAG_DMA_MAP) {
+		page = page_pool_dma_map(pool, page);
+		if (!page)
+			return NULL;
 	}
-	page->dma_addr = dma;
-
-	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
 
-skip_dma_map:
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
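For context, page_pool_dma_map() only runs when the driver created its
pool with PP_FLAG_DMA_MAP set, and the sync branch only when
PP_FLAG_DMA_SYNC_DEV is also set. A minimal driver-side sketch of such
a setup (illustrative values; assumes a DMA-capable struct device *dev):

	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,			/* order-0 pages */
		.pool_size	= 256,			/* illustrative size */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,			/* device doing the DMA */
		.dma_dir	= DMA_FROM_DEVICE,	/* RX direction */
		.max_len	= PAGE_SIZE,		/* sync len for DMA_SYNC_DEV */
		.offset		= 0,
	};
	struct page_pool *pool = page_pool_create(&pp_params);
	struct page *page;

	if (IS_ERR(pool))
		return PTR_ERR(pool);

	/* A pool-cache miss takes __page_pool_alloc_pages_slow(), which
	 * after this patch maps the fresh page via page_pool_dma_map()
	 * before returning it.
	 */
	page = page_pool_dev_alloc_pages(pool);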
Thread overview: 21+ messages
2021-02-24 10:26 [RFC PATCH 0/3] Introduce a bulk order-0 page allocator for sunrpc Mel Gorman
2021-02-24 10:26 ` [PATCH 1/3] SUNRPC: Set rq_page_end differently Mel Gorman
2021-02-24 10:26 ` [PATCH 2/3] mm, page_alloc: Add a bulk page allocator Mel Gorman
2021-02-24 10:26 ` [PATCH 3/3] SUNRPC: Refresh rq_pages using " Mel Gorman
2021-02-24 11:27 ` [RFC PATCH 0/3] Introduce a bulk order-0 page allocator for sunrpc Jesper Dangaard Brouer
2021-02-24 11:55 ` Mel Gorman
2021-02-24 13:20 ` Chuck Lever
2021-02-24 18:56 ` [PATCH RFC net-next 0/3] Use bulk order-0 page allocator API for page_pool Jesper Dangaard Brouer
2021-02-24 18:56 ` Jesper Dangaard Brouer [this message]
2021-02-24 20:11 ` [PATCH RFC net-next 1/3] net: page_pool: refactor dma_map into own function page_pool_dma_map Ilias Apalodimas
2021-02-24 18:56 ` [PATCH RFC net-next 2/3] net: page_pool: use alloc_pages_bulk in refill code path Jesper Dangaard Brouer
2021-02-24 20:15 ` Ilias Apalodimas
2021-02-26 14:31 ` Jesper Dangaard Brouer
2021-02-24 18:56 ` [PATCH RFC net-next 3/3] mm: make zone->free_area[order] access faster Jesper Dangaard Brouer
2021-02-25 11:28 ` Mel Gorman
2021-02-25 15:16 ` Jesper Dangaard Brouer
2021-02-25 15:38 ` Mel Gorman
2021-02-26 14:34 ` Jesper Dangaard Brouer
2021-03-01 13:29 ` [PATCH RFC V2 net-next 0/2] Use bulk order-0 page allocator API for page_pool Jesper Dangaard Brouer
2021-03-01 13:29 ` [PATCH RFC V2 net-next 1/2] net: page_pool: refactor dma_map into own function page_pool_dma_map Jesper Dangaard Brouer
2021-03-01 13:29 ` [PATCH RFC V2 net-next 2/2] net: page_pool: use alloc_pages_bulk in refill code path Jesper Dangaard Brouer