linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/1] ixgbe: sync the first fragment unconditionally
@ 2019-08-06  9:29 Firo Yang
  2019-08-06 18:36 ` David Miller
  2019-08-06 20:00 ` Alexander Duyck
  0 siblings, 2 replies; 4+ messages in thread
From: Firo Yang @ 2019-08-06  9:29 UTC (permalink / raw)
  To: netdev; +Cc: jeffrey.t.kirsher, davem, intel-wired-lan, linux-kernel, Firo Yang

In a Xen environment with Xen-swiotlb enabled, the ixgbe driver
may allocate a page as the DMA buffer for the first fragment,
which is not suitable for Xen-swiotlb to do DMA operations on.
In that case, Xen-swiotlb internally allocates another page for
the DMA operations, and the two pages need to be synced;
otherwise, we may get an incomplete skb. To fix this problem,
sync the first fragment regardless of whether it is marked as
"page_released".

Signed-off-by: Firo Yang <firo.yang@suse.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index cbaf712d6529..200de9838096 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1825,13 +1825,7 @@ static void ixgbe_pull_tail(struct ixgbe_ring *rx_ring,
 static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
 				struct sk_buff *skb)
 {
-	/* if the page was released unmap it, else just sync our portion */
-	if (unlikely(IXGBE_CB(skb)->page_released)) {
-		dma_unmap_page_attrs(rx_ring->dev, IXGBE_CB(skb)->dma,
-				     ixgbe_rx_pg_size(rx_ring),
-				     DMA_FROM_DEVICE,
-				     IXGBE_RX_DMA_ATTR);
-	} else if (ring_uses_build_skb(rx_ring)) {
+	if (ring_uses_build_skb(rx_ring)) {
 		unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
 
 		dma_sync_single_range_for_cpu(rx_ring->dev,
@@ -1848,6 +1842,14 @@ static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
 					      skb_frag_size(frag),
 					      DMA_FROM_DEVICE);
 	}
+
+	/* If the page was released, just unmap it. */
+	if (unlikely(IXGBE_CB(skb)->page_released)) {
+		dma_unmap_page_attrs(rx_ring->dev, IXGBE_CB(skb)->dma,
+				     ixgbe_rx_pg_size(rx_ring),
+				     DMA_FROM_DEVICE,
+				     IXGBE_RX_DMA_ATTR);
+	}
 }
 
 /**
-- 
2.16.4



* Re: [PATCH 1/1] ixgbe: sync the first fragment unconditionally
  2019-08-06  9:29 [PATCH 1/1] ixgbe: sync the first fragment unconditionally Firo Yang
@ 2019-08-06 18:36 ` David Miller
  2019-08-06 19:52   ` [Intel-wired-lan] " Alexander Duyck
  2019-08-06 20:00 ` Alexander Duyck
  1 sibling, 1 reply; 4+ messages in thread
From: David Miller @ 2019-08-06 18:36 UTC (permalink / raw)
  To: firo.yang; +Cc: netdev, jeffrey.t.kirsher, intel-wired-lan, linux-kernel

From: Firo Yang <firo.yang@suse.com>
Date: Tue, 6 Aug 2019 09:29:51 +0000

> In a Xen environment with Xen-swiotlb enabled, the ixgbe driver
> may allocate a page as the DMA buffer for the first fragment,
> which is not suitable for Xen-swiotlb to do DMA operations on.
> In that case, Xen-swiotlb internally allocates another page for
> the DMA operations, and the two pages need to be synced;
> otherwise, we may get an incomplete skb. To fix this problem,
> sync the first fragment regardless of whether it is marked as
> "page_released".
> 
> Signed-off-by: Firo Yang <firo.yang@suse.com>

I don't understand, an unmap operation implies a sync operation.


* Re: [Intel-wired-lan] [PATCH 1/1] ixgbe: sync the first fragment unconditionally
  2019-08-06 18:36 ` David Miller
@ 2019-08-06 19:52   ` Alexander Duyck
  0 siblings, 0 replies; 4+ messages in thread
From: Alexander Duyck @ 2019-08-06 19:52 UTC (permalink / raw)
  To: David Miller; +Cc: firo.yang, Netdev, intel-wired-lan, LKML

On Tue, Aug 6, 2019 at 11:36 AM David Miller <davem@davemloft.net> wrote:
>
> From: Firo Yang <firo.yang@suse.com>
> Date: Tue, 6 Aug 2019 09:29:51 +0000
>
> > In a Xen environment with Xen-swiotlb enabled, the ixgbe driver
> > may allocate a page as the DMA buffer for the first fragment,
> > which is not suitable for Xen-swiotlb to do DMA operations on.
> > In that case, Xen-swiotlb internally allocates another page for
> > the DMA operations, and the two pages need to be synced;
> > otherwise, we may get an incomplete skb. To fix this problem,
> > sync the first fragment regardless of whether it is marked as
> > "page_released".
> >
> > Signed-off-by: Firo Yang <firo.yang@suse.com>
>
> I don't understand, an unmap operation implies a sync operation.

Actually it doesn't because ixgbe is mapping and unmapping with
DMA_ATTR_SKIP_CPU_SYNC.

The patch description isn't very good. The issue is that the sync in
this case is being skipped in ixgbe_get_rx_buffer for a frame where
the buffer spans more than a single page. As such we need to do both
the sync and the unmap call on the last frame when we encounter the
End Of Packet.


* Re: [PATCH 1/1] ixgbe: sync the first fragment unconditionally
  2019-08-06  9:29 [PATCH 1/1] ixgbe: sync the first fragment unconditionally Firo Yang
  2019-08-06 18:36 ` David Miller
@ 2019-08-06 20:00 ` Alexander Duyck
  1 sibling, 0 replies; 4+ messages in thread
From: Alexander Duyck @ 2019-08-06 20:00 UTC (permalink / raw)
  To: Firo Yang; +Cc: netdev, jeffrey.t.kirsher, davem, intel-wired-lan, linux-kernel

On Tue, Aug 6, 2019 at 2:38 AM Firo Yang <firo.yang@suse.com> wrote:
>
> In a Xen environment with Xen-swiotlb enabled, the ixgbe driver
> may allocate a page as the DMA buffer for the first fragment,
> which is not suitable for Xen-swiotlb to do DMA operations on.
> In that case, Xen-swiotlb internally allocates another page for
> the DMA operations, and the two pages need to be synced;
> otherwise, we may get an incomplete skb. To fix this problem,
> sync the first fragment regardless of whether it is marked as
> "page_released".
>
> Signed-off-by: Firo Yang <firo.yang@suse.com>

From what I can tell the code below is fine. However the patch
description isn't very clear about the issue.

Specifically, since we are mapping the frame with
DMA_ATTR_SKIP_CPU_SYNC, we have to unmap with that attribute as
well. As a result, no sync is performed by the unmap, and it must
be done manually since we skipped it for the first frag. As such
we need to always sync before possibly performing a page unmap
operation.

Also you can probably add the following to your patch description:
Fixes: f3213d932173 ("ixgbe: Update driver to make use of DMA attributes in Rx path")
Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>

> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> index cbaf712d6529..200de9838096 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> @@ -1825,13 +1825,7 @@ static void ixgbe_pull_tail(struct ixgbe_ring *rx_ring,
>  static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
>                                 struct sk_buff *skb)
>  {
> -       /* if the page was released unmap it, else just sync our portion */
> -       if (unlikely(IXGBE_CB(skb)->page_released)) {
> -               dma_unmap_page_attrs(rx_ring->dev, IXGBE_CB(skb)->dma,
> -                                    ixgbe_rx_pg_size(rx_ring),
> -                                    DMA_FROM_DEVICE,
> -                                    IXGBE_RX_DMA_ATTR);
> -       } else if (ring_uses_build_skb(rx_ring)) {
> +       if (ring_uses_build_skb(rx_ring)) {
>                 unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
>
>                 dma_sync_single_range_for_cpu(rx_ring->dev,
> @@ -1848,6 +1842,14 @@ static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
>                                               skb_frag_size(frag),
>                                               DMA_FROM_DEVICE);
>         }
> +
> +       /* If the page was released, just unmap it. */
> +       if (unlikely(IXGBE_CB(skb)->page_released)) {
> +               dma_unmap_page_attrs(rx_ring->dev, IXGBE_CB(skb)->dma,
> +                                    ixgbe_rx_pg_size(rx_ring),
> +                                    DMA_FROM_DEVICE,
> +                                    IXGBE_RX_DMA_ATTR);
> +       }
>  }
>
>  /**
> --
> 2.16.4
>

