From: Jonathan Lemon <jonathan.lemon@gmail.com>
To: <brouer@redhat.com>, <ilias.apalodimas@linaro.org>,
<saeedm@mellanox.com>, <tariqt@mellanox.com>
Cc: <netdev@vger.kernel.org>, <kernel-team@fb.com>
Subject: [PATCH 03/10 net-next] net/mlx5e: RX, Internal DMA mapping in page_pool
Date: Wed, 16 Oct 2019 15:50:21 -0700
Message-ID: <20191016225028.2100206-4-jonathan.lemon@gmail.com>
In-Reply-To: <20191016225028.2100206-1-jonathan.lemon@gmail.com>
From: Tariq Toukan <tariqt@mellanox.com>
After the RX page-cache was removed in the previous patch, let the
page_pool be responsible for the DMA mapping.
Issue: 1487631
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com>
---
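Notes (not part of the commit message): for reviewers unfamiliar with the
page_pool DMA-mapping mode, below is a minimal sketch of what PP_FLAG_DMA_MAP
buys us, assuming the page_pool API as of this series. The function name,
pool size, and device pointer are made up for illustration and are not taken
from the driver:

	#include <linux/dma-mapping.h>
	#include <net/page_pool.h>

	/* Illustrative sketch only.  With PP_FLAG_DMA_MAP the pool maps each
	 * page once at allocation time and keeps the mapping while the page
	 * is recycled, so the driver no longer calls dma_map_page() /
	 * dma_unmap_page() per page.
	 */
	static int example_pool_setup(struct device *dev)
	{
		struct page_pool_params pp_params = {
			.order		= 0,
			.flags		= PP_FLAG_DMA_MAP,	/* pool owns the DMA mapping */
			.pool_size	= 1024,			/* example size */
			.nid		= NUMA_NO_NODE,
			.dev		= dev,			/* device used for dma_map_page() */
			.dma_dir	= DMA_FROM_DEVICE,
		};
		struct page_pool *pool;
		struct page *page;
		dma_addr_t addr;

		pool = page_pool_create(&pp_params);
		if (IS_ERR(pool))
			return PTR_ERR(pool);

		page = page_pool_dev_alloc_pages(pool);	/* page comes back already mapped */
		if (unlikely(!page))
			return -ENOMEM;

		addr = page_pool_get_dma_addr(page);	/* replaces the per-page dma_map_page() */

		/* ... post addr to the HW RX ring; when the page is done: */
		page_pool_recycle_direct(pool, page);	/* mapping is preserved across recycles */
		return 0;
	}

This mirrors what the en_main.c and en_rx.c hunks below do: the driver only
flips pp_params.flags to PP_FLAG_DMA_MAP and reads the address back with
page_pool_get_dma_addr(), instead of mapping and unmapping pages itself.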
drivers/net/ethernet/mellanox/mlx5/core/en.h | 2 --
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 3 +--
.../net/ethernet/mellanox/mlx5/core/en_main.c | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 16 +---------------
4 files changed, 3 insertions(+), 20 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index a1ab5c76177d..2e281c755b65 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -926,10 +926,8 @@ bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev);
bool mlx5e_striding_rq_possible(struct mlx5_core_dev *mdev,
struct mlx5e_params *params);
-void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info);
void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
struct mlx5e_dma_info *dma_info);
-void mlx5e_page_release(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info);
void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 1b26061cb959..8376b2789575 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -161,8 +161,7 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
goto xdp_abort;
__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags);
__set_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags);
- if (!xsk)
- mlx5e_page_dma_unmap(rq, di);
+ /* xdp maps call xdp_release_frame() if needed */
rq->stats->xdp_redirect++;
return true;
default:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 168be1f800a3..2b828de1adf0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -546,7 +546,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
} else {
/* Create a page_pool and register it with rxq */
pp_params.order = 0;
- pp_params.flags = 0; /* No-internal DMA mapping in page_pool */
+ pp_params.flags = PP_FLAG_DMA_MAP;
pp_params.pool_size = pool_size;
pp_params.nid = cpu_to_node(c->cpu);
pp_params.dev = c->pdev;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 033b8264a4e4..1b74d03fdf06 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -190,14 +190,7 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq,
dma_info->page = page_pool_dev_alloc_pages(rq->page_pool);
if (unlikely(!dma_info->page))
return -ENOMEM;
-
- dma_info->addr = dma_map_page(rq->pdev, dma_info->page, 0,
- PAGE_SIZE, rq->buff.map_dir);
- if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
- page_pool_recycle_direct(rq->page_pool, dma_info->page);
- dma_info->page = NULL;
- return -ENOMEM;
- }
+ dma_info->addr = page_pool_get_dma_addr(dma_info->page);
return 0;
}
@@ -211,16 +204,9 @@ static inline int mlx5e_page_alloc(struct mlx5e_rq *rq,
return mlx5e_page_alloc_pool(rq, dma_info);
}
-void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info)
-{
- dma_unmap_page(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir);
-}
-
void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
struct mlx5e_dma_info *dma_info)
{
- mlx5e_page_dma_unmap(rq, dma_info);
-
page_pool_recycle_direct(rq->page_pool, dma_info->page);
}
--
2.17.1