From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Halil Pasic,
	Christoph Hellwig, Linus Torvalds
Subject: [PATCH 5.10 11/38] swiotlb: rework "fix info leak with DMA_FROM_DEVICE"
Date: Fri, 25 Mar 2022 16:04:55 +0100
Message-Id: <20220325150420.085364078@linuxfoundation.org>
In-Reply-To: <20220325150419.757836392@linuxfoundation.org>
References: <20220325150419.757836392@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: stable@vger.kernel.org

From: Halil Pasic

commit aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13 upstream.

Unfortunately, we ended up merging an old version of the patch "fix
info leak with DMA_FROM_DEVICE" instead of merging the latest one.
After I pointed out the mix-up and asked for guidance, Christoph (the
swiotlb maintainer) asked me to create an incremental fix. So here we
go.

The main differences between what we got and what was agreed are:
* swiotlb_sync_single_for_device() is also required to do an extra bounce
* we decided not to introduce DMA_ATTR_OVERWRITE until we have users for it
* the implementation of DMA_ATTR_OVERWRITE was flawed: DMA_ATTR_OVERWRITE
  must take precedence over DMA_ATTR_SKIP_CPU_SYNC

Thus this patch removes DMA_ATTR_OVERWRITE and makes
swiotlb_sync_single_for_device() bounce unconditionally (that is, also
when dir == DMA_TO_DEVICE) in order to avoid synchronising back stale
data from the swiotlb buffer.

Note that if the size used with the dma_sync_* API is smaller than the
size used with dma_[un]map_*, we may under certain circumstances still
end up with swiotlb not being transparent. In that sense, this is not a
perfect fix either. To make it bulletproof, we would have to bounce the
entire mapping/bounce buffer; for that, we would have to figure out the
starting address and the size of the mapping in
swiotlb_sync_single_for_device(). While this does seem possible, there
is as yet no firm consensus on how things are supposed to work.
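To see the leak concretely, consider the following minimal user-space
model (an illustration only, not kernel code: tlb, map_single(),
unmap_single() and fake_device_dma() are made-up stand-ins for the
bounce slot, swiotlb_tbl_map_single(), swiotlb_tbl_unmap_single() and
the device's DMA write). If the initial bounce is skipped for
DMA_FROM_DEVICE and the device does a short transfer, stale slot bytes
are copied back to the caller on unmap:

#include <stdio.h>
#include <string.h>

#define BUF_SZ 16

static char tlb[BUF_SZ];	/* "bounce slot", may hold stale kernel data */

/* Device writes only the first half of the mapped area (short transfer). */
static void fake_device_dma(char *dst)
{
	memcpy(dst, "RESPONSE", 8);
}

static void map_single(const char *orig, int bounce)
{
	/*
	 * The fix: bounce unconditionally, even for DMA_FROM_DEVICE, so
	 * the slot never holds anything but the caller's own bytes.
	 */
	if (bounce)
		memcpy(tlb, orig, BUF_SZ);
}

static void unmap_single(char *orig)
{
	/* DMA_FROM_DEVICE: copy the slot back to the caller's buffer. */
	memcpy(orig, tlb, BUF_SZ);
}

int main(void)
{
	char user_buf[BUF_SZ] = { 0 };

	memcpy(tlb, "SECRET-KERNEL-DT", BUF_SZ);	/* stale slot contents */
	map_single(user_buf, 0);			/* old behaviour: no bounce */
	fake_device_dma(tlb);
	unmap_single(user_buf);
	printf("without bounce: %.16s\n", user_buf);	/* "RESPONSEERNEL-DT": tail leaked */

	memset(user_buf, 0, BUF_SZ);
	memcpy(tlb, "SECRET-KERNEL-DT", BUF_SZ);	/* stale again */
	map_single(user_buf, 1);			/* fixed behaviour: bounce first */
	fake_device_dma(tlb);
	unmap_single(user_buf);
	printf("with bounce:    %.16s\n", user_buf);	/* "RESPONSE": only caller data */
	return 0;
}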
Signed-off-by: Halil Pasic
Fixes: ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE")
Cc: stable@vger.kernel.org
Reviewed-by: Christoph Hellwig
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 Documentation/core-api/dma-attributes.rst |  8 --------
 include/linux/dma-mapping.h               |  8 --------
 kernel/dma/swiotlb.c                      | 25 ++++++++++++++++---------
 3 files changed, 16 insertions(+), 25 deletions(-)

--- a/Documentation/core-api/dma-attributes.rst
+++ b/Documentation/core-api/dma-attributes.rst
@@ -130,11 +130,3 @@ accesses to DMA buffers in both privileg
 subsystem that the buffer is fully accessible at the elevated privilege level
 (and ideally inaccessible or at least read-only at the lesser-privileged
 levels).
-
-DMA_ATTR_OVERWRITE
-------------------
-
-This is a hint to the DMA-mapping subsystem that the device is expected to
-overwrite the entire mapped size, thus the caller does not require any of the
-previous buffer contents to be preserved. This allows bounce-buffering
-implementations to optimise DMA_FROM_DEVICE transfers.
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -62,14 +62,6 @@
 #define DMA_ATTR_PRIVILEGED		(1UL << 9)
 
 /*
- * This is a hint to the DMA-mapping subsystem that the device is expected
- * to overwrite the entire mapped size, thus the caller does not require any
- * of the previous buffer contents to be preserved. This allows
- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
- */
-#define DMA_ATTR_OVERWRITE		(1UL << 10)
-
-/*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
  * be given to a device to use as a DMA source or target. It is specific to a
  * given device and there may be a translation between the CPU physical address
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -597,10 +597,14 @@ phys_addr_t swiotlb_tbl_map_single(struc
 		io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i);
 	tlb_addr = slot_addr(io_tlb_start, index) + offset;
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-	    dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+	/*
+	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
+	 * to the tlb buffer, if we knew for sure the device would
+	 * overwrite the entire current content. But we don't. Thus
+	 * unconditional bounce may prevent leaking swiotlb content (i.e.
+	 * kernel memory) to user-space.
+	 */
+	swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
 	return tlb_addr;
 }
@@ -680,11 +684,14 @@ void swiotlb_tbl_sync_single(struct devi
 		BUG_ON(dir != DMA_TO_DEVICE);
 		break;
 	case SYNC_FOR_DEVICE:
-		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-			swiotlb_bounce(orig_addr, tlb_addr,
-				       size, DMA_TO_DEVICE);
-		else
-			BUG_ON(dir != DMA_FROM_DEVICE);
+		/*
+		 * Unconditional bounce is necessary to avoid corruption on
+		 * sync_*_for_cpu or dma_unmap_* when the device didn't
+		 * overwrite the whole length of the bounce buffer.
+		 */
+		swiotlb_bounce(orig_addr, tlb_addr,
+			       size, DMA_TO_DEVICE);
+		BUG_ON(!valid_dma_direction(dir));
 		break;
 	default:
 		BUG();
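For reference, the remaining partial-sync gap noted in the commit
message looks roughly like this from a driver's point of view. This is
a sketch under assumptions, not code from any real driver: rearm_rx(),
the device and the buffer sizes are hypothetical.

#include <linux/dma-mapping.h>
#include <linux/slab.h>

static int rearm_rx(struct device *dev)
{
	void *buf = kmalloc(4096, GFP_KERNEL);
	dma_addr_t dma;

	if (!buf)
		return -ENOMEM;

	/* Maps (and, with this patch, bounces) the full 4096 bytes. */
	dma = dma_map_single(dev, buf, 4096, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		kfree(buf);
		return -ENOMEM;
	}

	/*
	 * Hand only the first 512 bytes back to the device. swiotlb now
	 * bounces these 512 bytes unconditionally, but it has no way to
	 * learn that the mapping is really 4096 bytes long, so the
	 * remaining 3584 bytes of the slot are not re-bounced: the
	 * "not a perfect fix" case called out above.
	 */
	dma_sync_single_for_device(dev, dma, 512, DMA_FROM_DEVICE);

	dma_unmap_single(dev, dma, 4096, DMA_FROM_DEVICE);
	kfree(buf);
	return 0;
}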