From: Andre Przywara
To: "David S . Miller", Radhey Shyam Pandey
Cc: Michal Simek, Robert Hancock, netdev@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 03/14] net: axienet: Fix DMA descriptor cleanup path
Date: Fri, 10 Jan 2020 11:54:04 +0000
Message-Id: <20200110115415.75683-4-andre.przywara@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200110115415.75683-1-andre.przywara@arm.com>
References: <20200110115415.75683-1-andre.przywara@arm.com>

When axienet_dma_bd_init() bails out during the initialisation process,
it might do so with parts of the structure already allocated and
initialised, while other parts have not been touched yet. Before
returning in this case, we call axienet_dma_bd_release(), which does
not take care of this corner case.
This is most obvious from the first loop happily dereferencing
lp->rx_bd_v, which we only check to be non-NULL *afterwards*.

Make sure we only unmap or free already allocated structures, by:
- directly returning with -ENOMEM if nothing has been allocated at all
- checking for lp->rx_bd_v to be non-NULL *before* using it
- only unmapping allocated DMA RX regions

This avoids NULL pointer dereferences when initialisation fails.

Signed-off-by: Andre Przywara
---
 .../net/ethernet/xilinx/xilinx_axienet_main.c | 43 ++++++++++++-------
 1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index 97482cf093ce..7e90044cf2d9 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -160,24 +160,37 @@ static void axienet_dma_bd_release(struct net_device *ndev)
 	int i;
 	struct axienet_local *lp = netdev_priv(ndev);
 
+	/* If we end up here, tx_bd_v must have been DMA allocated. */
+	dma_free_coherent(ndev->dev.parent,
+			  sizeof(*lp->tx_bd_v) * lp->tx_bd_num,
+			  lp->tx_bd_v,
+			  lp->tx_bd_p);
+
+	if (!lp->rx_bd_v)
+		return;
+
 	for (i = 0; i < lp->rx_bd_num; i++) {
-		dma_unmap_single(ndev->dev.parent, lp->rx_bd_v[i].phys,
-				 lp->max_frm_size, DMA_FROM_DEVICE);
+		/* A NULL skb means this descriptor has not been initialised
+		 * at all.
+		 */
+		if (!lp->rx_bd_v[i].skb)
+			break;
+
 		dev_kfree_skb(lp->rx_bd_v[i].skb);
-	}
 
-	if (lp->rx_bd_v) {
-		dma_free_coherent(ndev->dev.parent,
-				  sizeof(*lp->rx_bd_v) * lp->rx_bd_num,
-				  lp->rx_bd_v,
-				  lp->rx_bd_p);
-	}
-	if (lp->tx_bd_v) {
-		dma_free_coherent(ndev->dev.parent,
-				  sizeof(*lp->tx_bd_v) * lp->tx_bd_num,
-				  lp->tx_bd_v,
-				  lp->tx_bd_p);
+		/* For each descriptor, we programmed cntrl with the (non-zero)
+		 * descriptor size, after it had been successfully allocated.
+		 * So a non-zero value in there means we need to unmap it.
+		 */
+		if (lp->rx_bd_v[i].cntrl)
+			dma_unmap_single(ndev->dev.parent, lp->rx_bd_v[i].phys,
+					 lp->max_frm_size, DMA_FROM_DEVICE);
 	}
+
+	dma_free_coherent(ndev->dev.parent,
+			  sizeof(*lp->rx_bd_v) * lp->rx_bd_num,
+			  lp->rx_bd_v,
+			  lp->rx_bd_p);
 }
 
 /**
@@ -207,7 +220,7 @@ static int axienet_dma_bd_init(struct net_device *ndev)
 					 sizeof(*lp->tx_bd_v) * lp->tx_bd_num,
 					 &lp->tx_bd_p, GFP_KERNEL);
 	if (!lp->tx_bd_v)
-		goto out;
+		return -ENOMEM;
 
 	lp->rx_bd_v = dma_alloc_coherent(ndev->dev.parent,
 					 sizeof(*lp->rx_bd_v) * lp->rx_bd_num,
-- 
2.17.1
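
For reference, this is how axienet_dma_bd_release() reads with the patch
applied. It is only a consolidated view reconstructed from the first hunk
above, shown for readability, not an additional change:

static void axienet_dma_bd_release(struct net_device *ndev)
{
	int i;
	struct axienet_local *lp = netdev_priv(ndev);

	/* If we end up here, tx_bd_v must have been DMA allocated. */
	dma_free_coherent(ndev->dev.parent,
			  sizeof(*lp->tx_bd_v) * lp->tx_bd_num,
			  lp->tx_bd_v,
			  lp->tx_bd_p);

	if (!lp->rx_bd_v)
		return;

	for (i = 0; i < lp->rx_bd_num; i++) {
		/* A NULL skb means this descriptor has not been initialised
		 * at all.
		 */
		if (!lp->rx_bd_v[i].skb)
			break;

		dev_kfree_skb(lp->rx_bd_v[i].skb);

		/* For each descriptor, we programmed cntrl with the (non-zero)
		 * descriptor size, after it had been successfully allocated.
		 * So a non-zero value in there means we need to unmap it.
		 */
		if (lp->rx_bd_v[i].cntrl)
			dma_unmap_single(ndev->dev.parent, lp->rx_bd_v[i].phys,
					 lp->max_frm_size, DMA_FROM_DEVICE);
	}

	dma_free_coherent(ndev->dev.parent,
			  sizeof(*lp->rx_bd_v) * lp->rx_bd_num,
			  lp->rx_bd_v,
			  lp->rx_bd_p);
}

Together with the second hunk, which makes axienet_dma_bd_init() return
-ENOMEM directly when the TX descriptor ring allocation fails instead of
jumping to the cleanup path, tx_bd_v is guaranteed to be valid whenever
this function runs, so only the RX side still needs the NULL and
per-descriptor checks.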