From: Jesper Dangaard Brouer <brouer@redhat.com>
To: netdev@vger.kernel.org, linux-mm@kvack.org
Cc: "Toke Høiland-Jørgensen" <toke@toke.dk>,
"Ilias Apalodimas" <ilias.apalodimas@linaro.org>,
willy@infradead.org, "Saeed Mahameed" <saeedm@mellanox.com>,
"Alexander Duyck" <alexander.duyck@gmail.com>,
"Jesper Dangaard Brouer" <brouer@redhat.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
mgorman@techsingularity.net,
"David S. Miller" <davem@davemloft.net>,
"Tariq Toukan" <tariqt@mellanox.com>
Subject: [net-next PATCH V3 3/3] page_pool: use DMA_ATTR_SKIP_CPU_SYNC for DMA mappings
Date: Wed, 13 Feb 2019 02:55:50 +0100
Message-ID: <155002295022.5597.14139756432375272348.stgit@firesoul>
In-Reply-To: <155002290134.5597.6544755780651689517.stgit@firesoul>
As pointed out by Alexander Duyck, the DMA mapping done in page_pool needs
to use the DMA attribute DMA_ATTR_SKIP_CPU_SYNC.

The principle behind page_pool keeping pages DMA-mapped for their whole
lifetime is that the driver takes over the DMA-sync steps; page_pool must
therefore skip the CPU sync when mapping and unmapping the pages.
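
For context, a minimal sketch (not part of this patch) of the driver-side
sync responsibility that this attribute assumes; the dev, dma_addr and
frame_len names below are illustrative assumptions, not page_pool API:

    /* page_pool skipped the CPU sync at map/unmap time, so the driver
     * syncs only the region of the page it actually touches.
     */

    /* Before the CPU reads a received frame: */
    dma_sync_single_for_cpu(dev, dma_addr, frame_len, DMA_FROM_DEVICE);

    /* Before handing the buffer back to the device for reuse: */
    dma_sync_single_for_device(dev, dma_addr, frame_len, DMA_FROM_DEVICE);

Syncing only the used region (rather than PAGE_SIZE << order) is the
performance win that motivates skipping the sync in page_pool itself.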
Reported-by: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
net/core/page_pool.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 897a69a1477e..5b2252c6d49b 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -141,9 +141,9 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
* into page private data (i.e 32bit cpu with 64bit DMA caps)
* This mapping is kept for lifetime of page, until leaving pool.
*/
- dma = dma_map_page(pool->p.dev, page, 0,
- (PAGE_SIZE << pool->p.order),
- pool->p.dma_dir);
+ dma = dma_map_page_attrs(pool->p.dev, page, 0,
+ (PAGE_SIZE << pool->p.order),
+ pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
if (dma_mapping_error(pool->p.dev, dma)) {
put_page(page);
return NULL;
@@ -184,8 +184,9 @@ static void __page_pool_clean_page(struct page_pool *pool,
dma = page->dma_addr;
/* DMA unmap */
- dma_unmap_page(pool->p.dev, dma,
- PAGE_SIZE << pool->p.order, pool->p.dma_dir);
+ dma_unmap_page_attrs(pool->p.dev, dma,
+ PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ DMA_ATTR_SKIP_CPU_SYNC);
page->dma_addr = 0;
}