* [PATCH net,v2] dpaa_eth: fix DMA mapping leak
@ 2019-12-23 7:39 Madalin Bucur
2019-12-23 14:48 ` Leon Romanovsky
2019-12-26 23:14 ` David Miller
0 siblings, 2 replies; 3+ messages in thread
From: Madalin Bucur @ 2019-12-23 7:39 UTC (permalink / raw)
To: davem, netdev; +Cc: leon, Madalin Bucur
On the error path some fragments remain DMA mapped. Add a fix
that unmaps all the fragments and rework the cleanup path to be simpler.
Fixes: 8151ee88bad5 ("dpaa_eth: use page backed rx buffers")
Signed-off-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
---
Changes from v1: used Dave's suggestion to simplify cleanup path
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 39 +++++++++++++-------------
1 file changed, 20 insertions(+), 19 deletions(-)
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 6a9d12dad5d9..a301f0095223 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -1719,7 +1719,7 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
int page_offset;
unsigned int sz;
int *count_ptr;
- int i;
+ int i, j;
vaddr = phys_to_virt(addr);
WARN_ON(!IS_ALIGNED((unsigned long)vaddr, SMP_CACHE_BYTES));
@@ -1736,14 +1736,14 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
WARN_ON(!IS_ALIGNED((unsigned long)sg_vaddr,
SMP_CACHE_BYTES));
+ dma_unmap_page(priv->rx_dma_dev, sg_addr,
+ DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE);
+
/* We may use multiple Rx pools */
dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
if (!dpaa_bp)
goto free_buffers;
- count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
- dma_unmap_page(priv->rx_dma_dev, sg_addr,
- DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE);
if (!skb) {
sz = dpaa_bp->size +
SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
@@ -1786,7 +1786,9 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
skb_add_rx_frag(skb, i - 1, head_page, frag_off,
frag_len, dpaa_bp->size);
}
+
/* Update the pool count for the current {cpu x bpool} */
+ count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
(*count_ptr)--;
if (qm_sg_entry_is_final(&sgt[i]))
@@ -1800,26 +1802,25 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
return skb;
free_buffers:
- /* compensate sw bpool counter changes */
- for (i--; i >= 0; i--) {
- dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
- if (dpaa_bp) {
- count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
- (*count_ptr)++;
- }
- }
/* free all the SG entries */
- for (i = 0; i < DPAA_SGT_MAX_ENTRIES ; i++) {
- sg_addr = qm_sg_addr(&sgt[i]);
+ for (j = 0; j < DPAA_SGT_MAX_ENTRIES ; j++) {
+ sg_addr = qm_sg_addr(&sgt[j]);
sg_vaddr = phys_to_virt(sg_addr);
+ /* all pages 0..i were unmapped */
+ if (j > i)
+ dma_unmap_page(priv->rx_dma_dev, qm_sg_addr(&sgt[j]),
+ DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE);
free_pages((unsigned long)sg_vaddr, 0);
- dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
- if (dpaa_bp) {
- count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
- (*count_ptr)--;
+ /* counters 0..i-1 were decremented */
+ if (j >= i) {
+ dpaa_bp = dpaa_bpid2pool(sgt[j].bpid);
+ if (dpaa_bp) {
+ count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
+ (*count_ptr)--;
+ }
}
- if (qm_sg_entry_is_final(&sgt[i]))
+ if (qm_sg_entry_is_final(&sgt[j]))
break;
}
/* free the SGT fragment */
--
2.1.0
* Re: [PATCH net,v2] dpaa_eth: fix DMA mapping leak
From: Leon Romanovsky @ 2019-12-23 14:48 UTC (permalink / raw)
To: Madalin Bucur; +Cc: davem, netdev
On Mon, Dec 23, 2019 at 09:39:22AM +0200, Madalin Bucur wrote:
> On the error path some fragments remain DMA mapped. Add a fix
> that unmaps all the fragments and rework the cleanup path to be simpler.
>
> Fixes: 8151ee88bad5 ("dpaa_eth: use page backed rx buffers")
Thanks
* Re: [PATCH net,v2] dpaa_eth: fix DMA mapping leak
From: David Miller @ 2019-12-26 23:14 UTC (permalink / raw)
To: madalin.bucur; +Cc: netdev, leon
From: Madalin Bucur <madalin.bucur@oss.nxp.com>
Date: Mon, 23 Dec 2019 09:39:22 +0200
> On the error path some fragments remain DMA mapped. Add a fix
> that unmaps all the fragments and rework the cleanup path to be simpler.
>
> Fixes: 8151ee88bad5 ("dpaa_eth: use page backed rx buffers")
> Signed-off-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
> ---
>
> Changes from v1: used Dave's suggestion to simplify cleanup path
Applied, thanks.