From: Borislav Petkov
Date: Mon, 16 Aug 2021 18:01:37 +0200
To: Yu-cheng Yu
Cc: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
 Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
 Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
 Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov,
 Pavel Machek, Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar",
 Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu,
 Haitao Huang, Rick P Edgecombe, "Kirill A. Shutemov"
Shutemov" Subject: Re: [PATCH v28 12/32] x86/mm: Update ptep_set_wrprotect() and pmdp_set_wrprotect() for transition from _PAGE_DIRTY to _PAGE_COW Message-ID: References: <20210722205219.7934-1-yu-cheng.yu@intel.com> <20210722205219.7934-13-yu-cheng.yu@intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline In-Reply-To: <20210722205219.7934-13-yu-cheng.yu@intel.com> Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=alien8.de header.s=dkim header.b=BRIM2XbG; spf=pass (imf11.hostedemail.com: domain of bp@alien8.de designates 5.9.137.197 as permitted sender) smtp.mailfrom=bp@alien8.de; dmarc=pass (policy=none) header.from=alien8.de X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 62F5CF0035CA X-Stat-Signature: jeth7bx3ejwo51348ozwhrj4ky983qmg X-HE-Tag: 1629130104-370050 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Thu, Jul 22, 2021 at 01:51:59PM -0700, Yu-cheng Yu wrote: > When Shadow Stack is introduced, [R/O + _PAGE_DIRTY] PTE is reserved for > shadow stack. Copy-on-write PTEs have [R/O + _PAGE_COW]. > > When a PTE goes from [R/W + _PAGE_DIRTY] to [R/O + _PAGE_COW], it could > become a transient shadow stack PTE in two cases: > > The first case is that some processors can start a write but end up seeing > a read-only PTE by the time they get to the Dirty bit, creating a transient > shadow stack PTE. However, this will not occur on processors supporting > Shadow Stack, and a TLB flush is not necessary. > > The second case is that when _PAGE_DIRTY is replaced with _PAGE_COW non- > atomically, a transient shadow stack PTE can be created as a result. > Thus, prevent that with cmpxchg. > > Dave Hansen, Jann Horn, Andy Lutomirski, and Peter Zijlstra provided many > insights to the issue. Jann Horn provided the cmpxchg solution. > > Signed-off-by: Yu-cheng Yu > Reviewed-by: Kees Cook > Reviewed-by: Kirill A. Shutemov > --- > arch/x86/include/asm/pgtable.h | 36 ++++++++++++++++++++++++++++++++++ > 1 file changed, 36 insertions(+) > > diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h > index cf7316e968df..df4ce715560a 100644 > --- a/arch/x86/include/asm/pgtable.h > +++ b/arch/x86/include/asm/pgtable.h > @@ -1278,6 +1278,24 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, > static inline void ptep_set_wrprotect(struct mm_struct *mm, > unsigned long addr, pte_t *ptep) > { > + /* > + * If Shadow Stack is enabled, pte_wrprotect() moves _PAGE_DIRTY > + * to _PAGE_COW (see comments at pte_wrprotect()). > + * When a thread reads a RW=1, Dirty=0 PTE and before changing it > + * to RW=0, Dirty=0, another thread could have written to the page > + * and the PTE is RW=1, Dirty=1 now. Use try_cmpxchg() to detect > + * PTE changes and update old_pte, then try again. > + */ > + if (cpu_feature_enabled(X86_FEATURE_SHSTK)) { > + pte_t old_pte, new_pte; > + > + old_pte = READ_ONCE(*ptep); > + do { > + new_pte = pte_wrprotect(old_pte); > + } while (!try_cmpxchg(&ptep->pte, &old_pte.pte, new_pte.pte)); > + > + return; > + } > clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte); > } > > @@ -1322,6 +1340,24 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm, > static inline void pmdp_set_wrprotect(struct mm_struct *mm, > unsigned long addr, pmd_t *pmdp) > { > + /* > + * If Shadow Stack is enabled, pmd_wrprotect() moves _PAGE_DIRTY > + * to _PAGE_COW (see comments at pmd_wrprotect()). 
>
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index cf7316e968df..df4ce715560a 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -1278,6 +1278,24 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>  static inline void ptep_set_wrprotect(struct mm_struct *mm,
>  				      unsigned long addr, pte_t *ptep)
>  {
> +	/*
> +	 * If Shadow Stack is enabled, pte_wrprotect() moves _PAGE_DIRTY
> +	 * to _PAGE_COW (see comments at pte_wrprotect()).
> +	 * When a thread reads a RW=1, Dirty=0 PTE and before changing it
> +	 * to RW=0, Dirty=0, another thread could have written to the page
> +	 * and the PTE is RW=1, Dirty=1 now. Use try_cmpxchg() to detect
> +	 * PTE changes and update old_pte, then try again.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
> +		pte_t old_pte, new_pte;
> +
> +		old_pte = READ_ONCE(*ptep);
> +		do {
> +			new_pte = pte_wrprotect(old_pte);
> +		} while (!try_cmpxchg(&ptep->pte, &old_pte.pte, new_pte.pte));
> +
> +		return;
> +	}
>  	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
>  }
>
> @@ -1322,6 +1340,24 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
>  static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>  				      unsigned long addr, pmd_t *pmdp)
>  {
> +	/*
> +	 * If Shadow Stack is enabled, pmd_wrprotect() moves _PAGE_DIRTY
> +	 * to _PAGE_COW (see comments at pmd_wrprotect()).
> +	 * When a thread reads a RW=1, Dirty=0 PMD and before changing it
> +	 * to RW=0, Dirty=0, another thread could have written to the page
> +	 * and the PMD is RW=1, Dirty=1 now. Use try_cmpxchg() to detect
> +	 * PMD changes and update old_pmd, then try again.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
> +		pmd_t old_pmd, new_pmd;
> +
> +		old_pmd = READ_ONCE(*pmdp);
> +		do {
> +			new_pmd = pmd_wrprotect(old_pmd);
> +		} while (!try_cmpxchg((pmdval_t *)pmdp, (pmdval_t *)&old_pmd, pmd_val(new_pmd)));

Why does that try_cmpxchg() call cast its operands instead of doing it
like the pte one above? I.e., why aren't you doing here the same thing
as above:

	...
	} while (!try_cmpxchg(&pmdp->pmd, &old_pmd.pmd, new_pmd.pmd));

?
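IOW, the whole loop would then look like this - just the patch's own
loop with the casts dropped, completely untested:

	pmd_t old_pmd, new_pmd;

	old_pmd = READ_ONCE(*pmdp);
	do {
		new_pmd = pmd_wrprotect(old_pmd);
	} while (!try_cmpxchg(&pmdp->pmd, &old_pmd.pmd, new_pmd.pmd));

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette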