From mboxrd@z Thu Jan 1 00:00:00 1970
From: Walter Wu <walter-zh.wu@mediatek.com>
To: Christoph Hellwig, Marek Szyprowski, Robin Murphy, Matthias Brugger,
	Ard Biesheuvel, Andrew Morton
Cc: wsd_upstream, Walter Wu <walter-zh.wu@mediatek.com>
Subject: [PATCH v2] dma-direct: improve DMA_ATTR_NO_KERNEL_MAPPING
Date: Thu, 4 Nov 2021 10:32:21 +0800
Message-ID: <20211104023221.16391-1-walter-zh.wu@mediatek.com>
X-Mailer: git-send-email 2.18.0
X-Mailing-List: linux-kernel@vger.kernel.org

When a buffer is allocated from DMA coherent memory with
DMA_ATTR_NO_KERNEL_MAPPING, its kernel mapping still exists. Callers
passing DMA_ATTR_NO_KERNEL_MAPPING already promise not to rely on a
kernel mapping, and actually removing the mapping brings some
improvements:

a) Security. In some cases we don't want the allocated buffer to be
   readable via CPU speculative execution. Removing the page from the
   kernel mapping ensures the CPU cannot read it speculatively, so this
   patch makes DMA_ATTR_NO_KERNEL_MAPPING unmap the buffer from the
   kernel linear region.

b) Debugging. If the buffer is mapped into user space and is only
   supposed to be accessed there, nobody should touch it from kernel
   space. Once the kernel mapping is removed, any kernel-space access
   faults immediately, which makes it easy to catch the offender.

This only works if the memory is mapped at page granularity in the
linear region, so it is currently supported on arm64 only (where
CONFIG_RODATA_FULL_DEFAULT_ENABLED forces page-granular mappings).

Signed-off-by: Walter Wu <walter-zh.wu@mediatek.com>
Suggested-by: Christoph Hellwig
Suggested-by: Ard Biesheuvel
Cc: Christoph Hellwig
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: Matthias Brugger
Cc: Ard Biesheuvel
Cc: Andrew Morton
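
For clarity, a minimal sketch of the caller side follows; the helper
names and the device are hypothetical and not part of this patch.
dma_alloc_attrs() with DMA_ATTR_NO_KERNEL_MAPPING returns an opaque
struct page cookie rather than a kernel virtual address, and with this
patch applied (arm64, CONFIG_RODATA_FULL_DEFAULT_ENABLED=y) the backing
pages are also absent from the kernel linear map until they are freed:

#include <linux/dma-mapping.h>

/*
 * Hypothetical caller: allocate a buffer that the kernel never
 * touches through the linear map. The returned cookie must not be
 * dereferenced; hand the dma_addr_t to the device, or map the
 * pages into user space via dma_mmap_attrs().
 */
static void *alloc_no_kmap(struct device *dev, size_t size,
			   dma_addr_t *dma_handle)
{
	return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL,
			       DMA_ATTR_NO_KERNEL_MAPPING);
}

static void free_no_kmap(struct device *dev, size_t size,
			 void *cookie, dma_addr_t dma_handle)
{
	/* dma_direct_free() restores the linear mapping before freeing */
	dma_free_attrs(dev, size, cookie, dma_handle,
		       DMA_ATTR_NO_KERNEL_MAPPING);
}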
---
v2:
1. modify commit message and fix the removal of the kernel mapping for arm64
2. fix build error for x86

 include/linux/set_memory.h |  5 +++++
 kernel/dma/direct.c        | 13 +++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index f36be5166c19..6c7d1683339c 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -7,11 +7,16 @@
 
 #ifdef CONFIG_ARCH_HAS_SET_MEMORY
 #include <asm/set_memory.h>
+
+#ifndef CONFIG_RODATA_FULL_DEFAULT_ENABLED
+static inline int set_memory_valid(unsigned long addr, int numpages, int enable) { return 0; }
+#endif
 #else
 static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
+static inline int set_memory_valid(unsigned long addr, int numpages, int enable) { return 0; }
 #endif
 
 #ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 4c6c5e0635e3..d5d03b51b708 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -155,6 +155,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	struct page *page;
 	void *ret;
 	int err;
+	unsigned long kaddr;
 
 	size = PAGE_ALIGN(size);
 	if (attrs & DMA_ATTR_NO_WARN)
@@ -169,6 +170,11 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		if (!PageHighMem(page))
 			arch_dma_prep_coherent(page, size);
 		*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
+		if (IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED)) {
+			kaddr = (unsigned long)phys_to_virt(dma_to_phys(dev, *dma_handle));
+			/* remove the page from the kernel mapping, arm64 only */
+			set_memory_valid(kaddr, size >> PAGE_SHIFT, 0);
+		}
 		/* return the page pointer as the opaque cookie */
 		return page;
 	}
@@ -275,9 +281,16 @@ void dma_direct_free(struct device *dev, size_t size,
 		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
 	unsigned int page_order = get_order(size);
+	unsigned long kaddr;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
 	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
+		if (IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED)) {
+			size = PAGE_ALIGN(size);
+			kaddr = (unsigned long)phys_to_virt(dma_to_phys(dev, dma_addr));
+			/* re-create the kernel mapping, arm64 only */
+			set_memory_valid(kaddr, size >> PAGE_SHIFT, 1);
+		}
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
-- 
2.18.0
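
For readers wondering why this is arm64-only: set_memory_valid() is
implemented in arch/arm64/mm/pageattr.c by toggling the PTE valid bit
over the given linear-map range. The sketch below reflects the
v5.15-era implementation and is meant as an illustration of the
mechanism rather than a verbatim quote of the source:

/* arch/arm64/mm/pageattr.c (sketch) */
int set_memory_valid(unsigned long addr, int numpages, int enable)
{
	if (enable)
		/* set PTE_VALID again: kernel accesses work as before */
		return __change_memory_common(addr, PAGE_SIZE * numpages,
					      __pgprot(PTE_VALID),
					      __pgprot(0));
	else
		/* clear PTE_VALID: any kernel access now faults */
		return __change_memory_common(addr, PAGE_SIZE * numpages,
					      __pgprot(0),
					      __pgprot(PTE_VALID));
}

__change_memory_common() walks the range with apply_to_page_range(),
which only works on PTE-level mappings; rodata=full
(CONFIG_RODATA_FULL_DEFAULT_ENABLED) is what forces arm64 to map the
linear region at page granularity, hence the guard in this patch.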