From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, David Stevens, Joerg Roedel
Subject: [PATCH 5.15 909/913] iommu/dma: Account for min_align_mask w/swiotlb
Date: Tue, 5 Apr 2022 09:32:51 +0200
Message-Id: <20220405070407.067298408@linuxfoundation.org>
In-Reply-To: <20220405070339.801210740@linuxfoundation.org>
References: <20220405070339.801210740@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: David Stevens

commit 2cbc61a1b1665c84282dbf2b1747ffa0b6248639 upstream.

Pass the non-aligned size to __iommu_dma_map when using swiotlb bounce
buffers in iommu_dma_map_page, to account for min_align_mask.

To deal with granule alignment, __iommu_dma_map maps iova_align(size +
iova_off) bytes starting at phys - iova_off. If iommu_dma_map_page
passes aligned size when using swiotlb, then this becomes
iova_align(iova_align(orig_size) + iova_off). Normally iova_off will be
zero when using swiotlb. However, this is not the case for devices that
set min_align_mask. When iova_off is non-zero, __iommu_dma_map ends up
mapping an extra page at the end of the buffer. Beyond just being a
security issue, the extra page is not cleaned up by __iommu_dma_unmap.
This causes problems when the IOVA is reused, due to collisions in the
iommu driver. Just passing the original size is sufficient, since
__iommu_dma_map will take care of granule alignment.
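To see the double alignment concretely, here is a minimal user-space
sketch (not part of the patch; the 4 KiB granule and the example sizes
are assumptions chosen for illustration) that mimics iova_align() and
shows the extra granule appearing when the caller pre-aligns the size:

#include <stdio.h>
#include <stddef.h>

/* Assumed IOVA granule of 4 KiB, the common case. */
#define GRANULE 4096UL

/* User-space stand-in for the kernel's iova_align(). */
static size_t iova_align(size_t size)
{
	return (size + GRANULE - 1) & ~(GRANULE - 1);
}

int main(void)
{
	/* Example values: a 3600-byte buffer whose physical address sits
	 * at a 400-byte offset within the granule, an offset preserved
	 * because the device set min_align_mask. */
	size_t orig_size = 3600;
	size_t iova_off = 400;

	/* What __iommu_dma_map maps when handed the original size. */
	size_t correct = iova_align(orig_size + iova_off);

	/* What it mapped when the caller pre-aligned the size first. */
	size_t buggy = iova_align(iova_align(orig_size) + iova_off);

	printf("original size: %zu page(s)\n", correct / GRANULE); /* 1 */
	printf("pre-aligned:   %zu page(s)\n", buggy / GRANULE);   /* 2 */
	return 0;
}

The second page in the pre-aligned case is the stray mapping that
__iommu_dma_unmap never tears down.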
Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
Signed-off-by: David Stevens
Link: https://lore.kernel.org/r/20210929023300.335969-8-stevensd@google.com
Signed-off-by: Joerg Roedel
Signed-off-by: Greg Kroah-Hartman
---
 drivers/iommu/dma-iommu.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -806,7 +806,6 @@ static dma_addr_t iommu_dma_map_page(str
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
-	size_t aligned_size = size;
 	dma_addr_t iova, dma_mask = dma_get_mask(dev);
 
 	/*
@@ -815,7 +814,7 @@ static dma_addr_t iommu_dma_map_page(str
 	 */
 	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
 		void *padding_start;
-		size_t padding_size;
+		size_t padding_size, aligned_size;
 
 		aligned_size = iova_align(iovad, size);
 		phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
@@ -840,7 +839,7 @@ static dma_addr_t iommu_dma_map_page(str
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		arch_sync_dma_for_device(phys, size, dir);
 
-	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
+	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
 	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 	return iova;
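For context, min_align_mask is what makes iova_off non-zero on the
swiotlb path in the first place. A hedged sketch of how a driver opts
in, modeled on the NVMe driver's use of dma_set_min_align_mask() (the
512-byte alignment value and the function name example_setup_dma are
assumed for illustration; NVMe passes NVME_CTRL_PAGE_SIZE - 1):

#include <linux/dma-mapping.h>

/* Illustrative only: ask the DMA layer to preserve the low 9 bits of
 * the physical address across mappings. Once set, bounce-buffered
 * mappings can start at a non-zero offset within the IOVA granule,
 * which is exactly the case this fix addresses. */
static void example_setup_dma(struct device *dev)
{
	dma_set_min_align_mask(dev, 512 - 1);
}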