From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Sean Anderson,
    Andrew Lunn, Jakub Kicinski, Sasha Levin
Subject: [PATCH 5.10 113/141] net: sunhme: Fix packet reception for len < RX_COPY_THRESHOLD
Date: Mon, 26 Sep 2022 12:12:19 +0200
Message-Id: <20220926100758.540545785@linuxfoundation.org>
In-Reply-To: <20220926100754.639112000@linuxfoundation.org>
References: <20220926100754.639112000@linuxfoundation.org>
X-Mailer: git-send-email 2.37.3
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Sean Anderson

[ Upstream commit 878e2405710aacfeeb19364c300f38b7a9abfe8f ]

There is a separate receive path for small packets (under 256 bytes).
Instead of allocating a new DMA-capable skb to be used for the next
packet, this path allocates an skb and copies the data into it (reusing
the existing skb for the next packet).

There are two bytes of junk data at the beginning of every packet. I
believe these are inserted in order to allow aligned DMA and IP
headers. We skip over them using skb_reserve. Before copying over the
data, we must use a barrier to ensure we see the whole packet. The
current code only synchronizes len bytes, starting from the beginning
of the packet, including the junk bytes. However, this leaves off the
final two bytes in the packet. Synchronize the whole packet.

To reproduce this problem, ping an HME with a payload size between 17
and 214:

	$ ping -s 17 <address of the HME>

which will complain rather loudly about the data mismatch. Small
packets (below 60 bytes on the wire) do not have this issue. I suspect
this is related to the padding added to increase the minimum packet
size.
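For context, the small-packet copy path after this change looks roughly
like the sketch below. This is a paraphrase for illustration, not the
literal 5.10 source: the allocation and error handling are condensed,
and only the two dma_sync_* lengths differ from the pre-patch code.

	/* Small-packet receive: copy the frame out of the DMA buffer and
	 * hand the original buffer straight back to the RX ring.
	 * (Condensed sketch, not the verbatim driver code.)
	 */
	struct sk_buff *copy_skb = netdev_alloc_skb(dev, len + 2);

	if (copy_skb == NULL)
		goto drop_it;

	skb_reserve(copy_skb, 2);	/* keep the IP header aligned */
	skb_put(copy_skb, len);

	/* The DMA buffer holds 2 bytes of padding in front of the frame,
	 * so syncing only "len" bytes from dma_addr left the final 2 bytes
	 * of the frame outside the sync window; "len + 2" covers the
	 * padding plus the whole frame.
	 */
	dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
	skb_copy_from_linear_data(skb, copy_skb->data, len);
	dma_sync_single_for_device(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);

	/* Reuse original ring buffer. */
	hme_write_rxd(hp, this,
		      (RXFLAG_OWN | ((RX_BUF_ALLOC_SIZE - RX_OFFSET) << 16)),
		      dma_addr);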
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Sean Anderson
Reviewed-by: Andrew Lunn
Link: https://lore.kernel.org/r/20220920235018.1675956-1-seanga2@gmail.com
Signed-off-by: Jakub Kicinski
Signed-off-by: Sasha Levin
---
 drivers/net/ethernet/sun/sunhme.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/sun/sunhme.c b/drivers/net/ethernet/sun/sunhme.c
index 69fc47089e62..940db4ec5714 100644
--- a/drivers/net/ethernet/sun/sunhme.c
+++ b/drivers/net/ethernet/sun/sunhme.c
@@ -2063,9 +2063,9 @@ static void happy_meal_rx(struct happy_meal *hp, struct net_device *dev)
 
 			skb_reserve(copy_skb, 2);
 			skb_put(copy_skb, len);
-			dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len, DMA_FROM_DEVICE);
+			dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
 			skb_copy_from_linear_data(skb, copy_skb->data, len);
-			dma_sync_single_for_device(hp->dma_dev, dma_addr, len, DMA_FROM_DEVICE);
+			dma_sync_single_for_device(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
 			/* Reuse original ring buffer. */
 			hme_write_rxd(hp, this,
 				      (RXFLAG_OWN|((RX_BUF_ALLOC_SIZE-RX_OFFSET)<<16)),
-- 
2.35.1