Date: Mon, 3 Oct 2022 17:17:39 +0300
From: "Kirill A. Shutemov"
To: Rick Edgecombe
Cc: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	"Ravi V. Shankar", Weijiang Yang, joao.moreira@intel.com,
	John Allen, kcc@google.com, eranian@google.com, rppt@kernel.org,
	jamorris@linux.microsoft.com, dethoma@microsoft.com,
	Yu-cheng Yu, Christoph Hellwig
Subject: Re: [PATCH v2 08/39] x86/mm: Remove _PAGE_DIRTY from kernel RO pages
Message-ID: <20221003141739.qdgdgfr67cycadgs@box.shutemov.name>
References: <20220929222936.14584-1-rick.p.edgecombe@intel.com>
 <20220929222936.14584-9-rick.p.edgecombe@intel.com>
In-Reply-To: <20220929222936.14584-9-rick.p.edgecombe@intel.com>

On Thu, Sep 29, 2022 at 03:29:05PM -0700, Rick Edgecombe wrote:
> From: Yu-cheng Yu
> 
> Processors sometimes directly create Write=0,Dirty=1 PTEs, but such
> PTEs are also created by software. One such case is that kernel
> read-only pages are historically set up as Dirty.
> 
> New processors that support Shadow Stack regard Write=0,Dirty=1 PTEs as
> shadow stack pages. When CR4.CET=1 and IA32_S_CET.SH_STK_EN=1, some
> instructions can write to such supervisor memory. The kernel does not set
> IA32_S_CET.SH_STK_EN, but to reduce ambiguity between shadow stack and
> regular Write=0 pages, remove Dirty=1 from any kernel Write=0 PTEs.
> 
> Signed-off-by: Yu-cheng Yu
> Co-developed-by: Rick Edgecombe
> Signed-off-by: Rick Edgecombe
> Cc: "H. Peter Anvin"
> Cc: Kees Cook
> Cc: Thomas Gleixner
> Cc: Dave Hansen
> Cc: Christoph Hellwig
> Cc: Andy Lutomirski
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: Peter Zijlstra
> 
> ---
> 
> v2:
>  - Normalize PTE bit descriptions between patches
> 
>  arch/x86/include/asm/pgtable_types.h | 6 +++---
>  arch/x86/mm/pat/set_memory.c         | 2 +-
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index aa174fed3a71..ff82237e7b6b 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -192,10 +192,10 @@ enum page_cache_mode {
>  #define _KERNPG_TABLE		 (__PP|__RW|   0|___A|   0|___D|   0|   0| _ENC)
>  #define _PAGE_TABLE_NOENC	 (__PP|__RW|_USR|___A|   0|___D|   0|   0)
>  #define _PAGE_TABLE		 (__PP|__RW|_USR|___A|   0|___D|   0|   0| _ENC)
> -#define __PAGE_KERNEL_RO	 (__PP|   0|   0|___A|__NX|___D|   0|___G)
> -#define __PAGE_KERNEL_ROX	 (__PP|   0|   0|___A|   0|___D|   0|___G)
> +#define __PAGE_KERNEL_RO	 (__PP|   0|   0|___A|__NX|   0|   0|___G)
> +#define __PAGE_KERNEL_ROX	 (__PP|   0|   0|___A|   0|   0|   0|___G)
>  #define __PAGE_KERNEL_NOCACHE	 (__PP|__RW|   0|___A|__NX|___D|   0|___G| __NC)
> -#define __PAGE_KERNEL_VVAR	 (__PP|   0|_USR|___A|__NX|___D|   0|___G)
> +#define __PAGE_KERNEL_VVAR	 (__PP|   0|_USR|___A|__NX|   0|   0|___G)
>  #define __PAGE_KERNEL_LARGE	 (__PP|__RW|   0|___A|__NX|___D|_PSE|___G)
>  #define __PAGE_KERNEL_LARGE_EXEC (__PP|__RW|   0|___A|   0|___D|_PSE|___G)
>  #define __PAGE_KERNEL_WP	 (__PP|__RW|   0|___A|__NX|___D|   0|___G| __WP)
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 1abd5438f126..ed9193b469ba 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -1977,7 +1977,7 @@ int set_memory_nx(unsigned long addr, int numpages)
>  
>  int set_memory_ro(unsigned long addr, int numpages)
>  {
> -	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW), 0);
> +	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW | _PAGE_DIRTY), 0);
>  }

Hm. Do we also need to modify the *_wrprotect() helpers to clear the
Dirty bit?
I guess not (at least not without a lot of auditing), as we risk losing
the Dirty bit on page cache pages. But why is it safe? Do we only care
about kernel PTEs here? Are userspace Write=0,Dirty=1 PTEs handled the
same as before?

>  int set_memory_rw(unsigned long addr, int numpages)
> -- 
> 2.17.1
> 

-- 
  Kiryl Shutsemau / Kirill A. Shutemov