From: ira.weiny@intel.com
To: Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Dan Williams <dan.j.williams@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	"Shankar, Ravi V" <ravi.v.shankar@intel.com>,
	linux-kernel@vger.kernel.org
Subject: [PATCH V9 15/45] x86/pkeys: Introduce pks_write_pkrs()
Date: Thu, 10 Mar 2022 09:19:49 -0800	[thread overview]
Message-ID: <20220310172019.850939-16-ira.weiny@intel.com> (raw)
In-Reply-To: <20220310172019.850939-1-ira.weiny@intel.com>

From: Ira Weiny <ira.weiny@intel.com>

Writing to MSRs is inefficient.  Even though the underlying PKS
register, MSR_IA32_PKRS, is not serializing, writing to the MSR should
still be avoided where possible, especially when updates are made in
critical paths such as the scheduler or the entry code.

Introduce pks_write_pkrs().  pks_write_pkrs() avoids writing
MSR_IA32_PKRS if the pkrs value has not changed for the current CPU.
Most of the callers are in non-preemptible code paths.  Therefore,
avoid calling preempt_{disable,enable}() to protect the per-cpu cache
and instead rely on the callers to provide this protection.  Do the
same with the checks for X86_FEATURE_PKS.

On startup, however unlikely, PKS_INIT_VALUE may be 0.  Because the
per-cpu cache also starts at 0, pks_write_pkrs() would then see no
change and skip the MSR write.  Therefore, keep the direct MSR write in
pks_setup() to ensure the MSR is initialized at least one time.

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
Changes for V9
	From Dave Hansen
		Update commit message with a bit more detail about why
			this optimization is needed
		Update the code comments as well.

Changes for V8
	From Thomas
		Remove get/put_cpu_ptr() and make this a 'lower level'
		call.  This makes it preemption unsafe, but it is called
		mostly where preemption is already disabled.  Document
		this as a precondition of the call; callers which need
		to can disable preemption themselves.
		Add lockdep assert for preemption
	Ensure MSR gets written even if the PKS_INIT_VALUE is 0.
	Completely re-write the commit message.
	s/write_pkrs/pks_write_pkrs/
	Split this off into a singular patch

Changes for V7
	Create a dynamic pkrs_initial_value in early init code.
	Clean up comments
	Add comment to macro guard
---
 arch/x86/mm/pkeys.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index f904376570f4..10521f1a292e 100644
--- a/arch/x86/mm/pkeys.c
+++ b/arch/x86/mm/pkeys.c
@@ -213,15 +213,56 @@ u32 pkey_update_pkval(u32 pkval, u8 pkey, u32 accessbits)
 
 #ifdef CONFIG_ARCH_ENABLE_SUPERVISOR_PKEYS
 
+static DEFINE_PER_CPU(u32, pkrs_cache);
+
+/*
+ * pks_write_pkrs() - Write the pkrs of the current CPU
+ * @new_pkrs: New value to write to the current CPU register
+ *
+ * Optimizes MSR writes by maintaining a per-cpu cache.
+ *
+ * Context: must be called with preemption disabled
+ * Context: must only be called if PKS is enabled
+ *
+ * It should also be noted that the underlying WRMSR(MSR_IA32_PKRS) is not
+ * serializing but still maintains ordering properties similar to WRPKRU.
+ * The current SDM section on PKRS needs updating but should be the same as
+ * that of WRPKRU.  Quote from the WRPKRU text:
+ *
+ *     WRPKRU will never execute transiently. Memory accesses
+ *     affected by PKRU register will not execute (even transiently)
+ *     until all prior executions of WRPKRU have completed execution
+ *     and updated the PKRU register.
+ */
+static inline void pks_write_pkrs(u32 new_pkrs)
+{
+	u32 pkrs = __this_cpu_read(pkrs_cache);
+
+	lockdep_assert_preemption_disabled();
+
+	if (pkrs != new_pkrs) {
+		__this_cpu_write(pkrs_cache, new_pkrs);
+		wrmsrl(MSR_IA32_PKRS, new_pkrs);
+	}
+}
+
 /*
  * PKS is independent of PKU and either or both may be supported on a CPU.
+ *
+ * Context: must be called with preemption disabled
  */
 void pks_setup(void)
 {
 	if (!cpu_feature_enabled(X86_FEATURE_PKS))
 		return;
 
+	/*
+	 * If the PKS_INIT_VALUE is 0 then pks_write_pkrs() will fail to
+	 * initialize the MSR.  Do a single write here to ensure the MSR is
+	 * written at least one time.
+	 */
 	wrmsrl(MSR_IA32_PKRS, PKS_INIT_VALUE);
+	pks_write_pkrs(PKS_INIT_VALUE);
 	cr4_set_bits(X86_CR4_PKS);
 }
 
-- 
2.35.1

