From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Gleixner <tglx@linutronix.de>
To: Dan Williams
Subject: Re: [PATCH V3 04/10] x86/pks: Preserve the PKRS MSR on context switch
References: <20201106232908.364581-1-ira.weiny@intel.com>
 <20201106232908.364581-5-ira.weiny@intel.com>
 <871rfoscz4.fsf@nanos.tec.linutronix.de>
 <87mtycqcjf.fsf@nanos.tec.linutronix.de>
 <878s9vqkrk.fsf@nanos.tec.linutronix.de>
 <875z4yrfhr.fsf@nanos.tec.linutronix.de>
Date: Fri, 18 Dec 2020 23:44:47 +0100
Message-ID: <87wnxepwdc.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Cc: Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra,
 Dave Hansen, Fenghua Yu, X86 ML, Linux Kernel Mailing List,
 Andrew Morton, Linux Doc Mailing List, linux-nvdimm, Linux MM,
 linux-kselftest@vger.kernel.org, Greg KH
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

On Fri, Dec 18 2020 at 13:58, Dan Williams wrote:
> On Fri, Dec 18, 2020 at 1:06 PM Thomas Gleixner wrote:
>> kmap_local() is fine. That can work automatically because it's strictly
>> local to the context which does the mapping.
>>
>> kmap() is dubious because it's a 'global' mapping as dictated per
>> HIGHMEM. So doing the RELAXED mode for kmap() is sensible, I think, to
>> identify cases where the mapped address is really handed to a different
>> execution context. We want to see those cases and analyse whether this
>> can't be solved in a different way. That's why I suggested to do a
>> warning in that case.
>>
>> Also vs. the DAX use case, I really meant the code in fs/dax and
>> drivers/dax/ itself which is handling this via dax_read_[un]lock.
>>
>> Does that make more sense?
>
> Yup, got it. The dax code can be precise wrt PKS in a way that
> kmap_local() cannot.

Which makes me wonder whether we should have kmap_local_for_read() or
something like that, which obviously could only be RO-enforced for the
real HIGHMEM case or for the (for now x86 only) enforced kmap_local()
debug mechanics on 64bit.

So for the !highmem case it would not magically make the existing
kernel mapping RO, but it could be forwarded to the PKS protection.
Aside from that, it's a nice annotation in the code. It could be used
right away for all the kmap[_atomic] -> kmap_local() conversions.
Thanks,

        tglx
---
 include/linux/highmem-internal.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -32,6 +32,10 @@ static inline void kmap_flush_tlb(unsign
 #define kmap_prot PAGE_KERNEL
 #endif
 
+#ifndef kmap_prot_ro
+#define kmap_prot_ro PAGE_KERNEL_RO
+#endif
+
 void *kmap_high(struct page *page);
 void kunmap_high(struct page *page);
 void __kmap_flush_unused(void);
@@ -73,6 +77,11 @@ static inline void *kmap_local_page(stru
 	return __kmap_local_page_prot(page, kmap_prot);
 }
 
+static inline void *kmap_local_page_for_read(struct page *page)
+{
+	return __kmap_local_page_prot(page, kmap_prot_ro);
+}
+
 static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
 {
 	return __kmap_local_page_prot(page, prot);
@@ -169,6 +178,11 @@ static inline void *kmap_local_page_prot
 {
 	return kmap_local_page(page);
 }
+
+static inline void *kmap_local_page_for_read(struct page *page)
+{
+	return kmap_local_page(page);
+}
 
 static inline void *kmap_local_pfn(unsigned long pfn)
 {
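
Just for illustration, a minimal sketch of how a caller that only reads
the page contents could use this at a kmap[_atomic]() conversion site.
The function below is hypothetical, not part of the patch; it only
assumes the kmap_local_page_for_read() helper above plus the existing
kunmap_local():

#include <linux/highmem.h>

/*
 * Hypothetical caller which only reads the page contents, so the
 * read-only local mapping is sufficient. On real HIGHMEM (or with the
 * 64bit kmap_local() debug mechanics / PKS) the mapping could actually
 * be enforced RO; otherwise it falls back to plain kmap_local_page()
 * semantics.
 */
static u32 sum_page_bytes(struct page *page)
{
	const u8 *kaddr;
	u32 sum = 0;
	size_t i;

	kaddr = kmap_local_page_for_read(page);
	for (i = 0; i < PAGE_SIZE; i++)
		sum += kaddr[i];
	/* Cast because kunmap_local() takes a non-const address. */
	kunmap_local((void *)kaddr);

	return sum;
}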