From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Adrian Larumbe, Vinod Koul, Sasha Levin, dmaengine@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH AUTOSEL 5.10 01/20] dmaengine: xilinx_dma: Fix read-after-free bug when terminating transfers
Date: Tue, 10 Aug 2021 10:15:19 -0400
Message-Id: <20210810141538.3117707-1-sashal@kernel.org>
X-Mailer: git-send-email 2.30.2

From: Adrian Larumbe

[ Upstream commit 7dd2dd4ff9f3abda601f22b9d01441a0869d20d7 ]

When the user calls dmaengine_terminate_sync, the driver will clean up
any remaining descriptors for all the pending or active transfers that
had previously been submitted. However, this might happen whilst the
tasklet is invoking the DMA callback for the last finished transfer, so
by the time it returns and takes over the channel's spinlock, the list
of completed descriptors it was traversing is no longer valid. This
leads to a read-after-free situation.

Fix it by signalling whether a user-triggered termination has happened
by means of a boolean variable.
Signed-off-by: Adrian Larumbe
Link: https://lore.kernel.org/r/20210706234338.7696-3-adrian.martinezlarumbe@imgtec.com
Signed-off-by: Vinod Koul
Signed-off-by: Sasha Levin
---
 drivers/dma/xilinx/xilinx_dma.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 79777550a6ff..9ffdbeec436b 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -394,6 +394,7 @@ struct xilinx_dma_tx_descriptor {
  * @genlock: Support genlock mode
  * @err: Channel has errors
  * @idle: Check for channel idle
+ * @terminating: Check for channel being synchronized by user
  * @tasklet: Cleanup work after irq
  * @config: Device configuration info
  * @flush_on_fsync: Flush on Frame sync
@@ -431,6 +432,7 @@ struct xilinx_dma_chan {
 	bool genlock;
 	bool err;
 	bool idle;
+	bool terminating;
 	struct tasklet_struct tasklet;
 	struct xilinx_vdma_config config;
 	bool flush_on_fsync;
@@ -1049,6 +1051,13 @@ static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
 		/* Run any dependencies, then free the descriptor */
 		dma_run_dependencies(&desc->async_tx);
 		xilinx_dma_free_tx_descriptor(chan, desc);
+
+		/*
+		 * While we ran a callback the user called a terminate function,
+		 * which takes care of cleaning up any remaining descriptors
+		 */
+		if (chan->terminating)
+			break;
 	}
 
 	spin_unlock_irqrestore(&chan->lock, flags);
@@ -1965,6 +1974,8 @@ static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
 	if (desc->cyclic)
 		chan->cyclic = true;
 
+	chan->terminating = false;
+
 	spin_unlock_irqrestore(&chan->lock, flags);
 
 	return cookie;
@@ -2436,6 +2447,7 @@ static int xilinx_dma_terminate_all(struct dma_chan *dchan)
 	xilinx_dma_chan_reset(chan);
 
 	/* Remove and free all of the descriptors in the lists */
+	chan->terminating = true;
 	xilinx_dma_free_descriptors(chan);
 
 	chan->idle = true;
-- 
2.30.2
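
For readers following the race itself rather than the driver specifics, below is
a minimal userspace sketch of the same pattern. It is an illustration of the
idea behind the fix, not kernel code: all names (struct channel, struct
descriptor, channel_desc_cleanup, channel_terminate) are hypothetical, a
pthread mutex stands in for the channel spinlock, and a hand-rolled singly
linked list stands in for list_for_each_entry_safe(). The cleanup loop must
drop the lock to run user callbacks, so a concurrent terminate can free the
descriptors the loop still intends to visit; the "terminating" flag tells the
loop to stop before it dereferences a cached next pointer into freed memory.

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

/* One completed transfer awaiting its user callback (hypothetical). */
struct descriptor {
	struct descriptor *next;
	void (*callback)(void *arg);
	void *arg;
};

struct channel {
	pthread_mutex_t lock;
	struct descriptor *done_list;	/* completed, not yet reaped */
	bool terminating;		/* set by a user-triggered terminate */
};

/* Reap completed descriptors, running callbacks with the lock dropped. */
void channel_desc_cleanup(struct channel *chan)
{
	struct descriptor *desc, *next;

	pthread_mutex_lock(&chan->lock);

	for (desc = chan->done_list; desc; desc = next) {
		next = desc->next;	/* cached, like list_for_each_entry_safe() */
		chan->done_list = next;	/* unlink the current descriptor */

		if (desc->callback) {
			/* Callbacks may block or resubmit: drop the lock. */
			pthread_mutex_unlock(&chan->lock);
			desc->callback(desc->arg);
			pthread_mutex_lock(&chan->lock);
		}

		free(desc);

		/*
		 * If channel_terminate() ran while the lock was dropped, it
		 * already freed everything "next" points at; stop traversing
		 * instead of advancing into freed memory.
		 */
		if (chan->terminating)
			break;
	}

	pthread_mutex_unlock(&chan->lock);
}

/* User-triggered termination: reclaim every descriptor still queued. */
void channel_terminate(struct channel *chan)
{
	pthread_mutex_lock(&chan->lock);

	chan->terminating = true;

	while (chan->done_list) {
		struct descriptor *desc = chan->done_list;

		chan->done_list = desc->next;
		free(desc);
	}

	pthread_mutex_unlock(&chan->lock);
}

In this sketch, as in the patch, a plain flag is enough because both sides
only read or write it with the lock held: the cleanup loop cannot keep the
lock across callbacks, so it needs a cheap way to learn that the list was torn
down underneath it, and submitting new work simply clears the flag again, just
as xilinx_dma_tx_submit() does with chan->terminating = false.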