From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Andrew Morton <akpm@linux-foundation.org>,
	x86@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Borislav Petkov <bp@alien8.de>,
	Peter Zijlstra <peterz@infradead.org>,
	Andy Lutomirski <luto@amacapital.net>,
	David Howells <dhowells@redhat.com>
Cc: Kees Cook <keescook@chromium.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Kai Huang <kai.huang@linux.intel.com>,
	Jacob Pan <jacob.jun.pan@linux.intel.com>,
	Alison Schofield <alison.schofield@intel.com>,
	linux-mm@kvack.org, kvm@vger.kernel.org,
	keyrings@vger.kernel.org, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv2 18/59] x86/mm: Calculate direct mapping size
Date: Wed, 31 Jul 2019 18:07:32 +0300
Message-ID: <20190731150813.26289-19-kirill.shutemov@linux.intel.com>
In-Reply-To: <20190731150813.26289-1-kirill.shutemov@linux.intel.com>

The kernel needs a way to access encrypted memory. We have two options
for how to approach it:

 - Create temporary mappings every time the kernel needs access to
   encrypted memory. That basically brings back highmem and its
   overhead.

 - Create multiple direct mappings, one per KeyID. In this setup we
   don't need to create temporary mappings on the fly -- encrypted
   memory is permanently available in the kernel address space.

We take the second approach as it has lower overhead.

It's worth noting that with per-KeyID direct mappings, a compromised
kernel would give access to decrypted data right away, without any
additional tricks to get memory mapped with the correct KeyID.

Per-KeyID mappings require much more virtual address space. On a
4-level machine with 64 KeyIDs, the 46-bit virtual address space
dedicated to the direct mapping maxes out with just 1 TiB of RAM:
64 TiB of virtual space split across 65 mappings (KeyID-0 plus 64
encrypted KeyIDs) leaves barely over 1 TiB each. Given that any
calculation of the direct mapping size is rounded up to 1 TiB, we
effectively claim the whole 46-bit address space for the direct mapping
on such a machine, regardless of RAM size.
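
To make the exhaustion concrete, here is a minimal userspace sketch of
the arithmetic (illustrative only; the 1ULL << 46 constant mirrors
1UL << (__VIRTUAL_MASK_SHIFT - 1) for 4-level paging, as used by
calculate_direct_mapping_size() below):

	#include <stdio.h>

	int main(void)
	{
		/* 4-level: 2^46 bytes (64 TiB) of VA for the direct mapping */
		unsigned long long available_va = 1ULL << 46;
		unsigned int nr_keyids = 64;

		/* KeyID-0 (unencrypted) plus 64 encrypted KeyIDs */
		unsigned long long per_keyid = available_va / (nr_keyids + 1);

		/* Prints 1008 GiB: once rounded to 1 TiB granularity, a
		 * single TiB of RAM exhausts the per-KeyID budget. */
		printf("budget per mapping: %llu GiB\n", per_keyid >> 30);
		return 0;
	}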

Increased usage of virtual address space has implications for KASLR:
we have less space for randomization. With 64 TiB claimed for the
direct mapping on 4-level paging, we are left with 27 TiB of entropy to
place page_offset_base, vmalloc_base and vmemmap_base.

5-level paging provides a much wider virtual address space, so KASLR
does not suffer significantly from per-KeyID direct mappings. Running
MKTME with 5-level paging is therefore preferred.

The direct mappings for the KeyIDs are placed next to each other in
the virtual address space. We need a way to find the boundaries of the
direct mapping for a particular KeyID.

The new variable direct_mapping_size specifies the size of a single
direct mapping. With this value, finding the direct mapping for KeyID-N
is trivial: PAGE_OFFSET + N * direct_mapping_size.
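
For illustration, a helper along these lines would do the lookup (a
hypothetical sketch, not part of this patch; the patch itself only
provides the variable):

	/* Hypothetical: base of the direct mapping for a given KeyID */
	static inline unsigned long keyid_direct_map_base(int keyid)
	{
		return PAGE_OFFSET + (unsigned long)keyid * direct_mapping_size;
	}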

The size of the direct mapping is calculated during KASLR setup. If
KASLR is disabled, it happens during MKTME initialization.

With MKTME, the size of the direct mapping has to be a power of 2,
which makes the implementation of __pa() efficient.
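
A sketch of why the power-of-2 constraint pays off, assuming
direct_mapping_mask == direct_mapping_size - 1 as set up in this patch
(the real __pa() change lands later in the series, in patch 20/59):

	/*
	 * Sketch only: fold any per-KeyID alias back to a physical
	 * address. With a power-of-2 mapping size, the
	 * N * direct_mapping_size component is stripped by a single AND
	 * instead of a division.
	 */
	static inline unsigned long pa_from_any_alias(unsigned long x)
	{
		return (x - PAGE_OFFSET) & direct_mapping_mask;
	}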

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 Documentation/x86/x86_64/mm.rst |  4 +++
 arch/x86/include/asm/page_32.h  |  1 +
 arch/x86/include/asm/page_64.h  |  2 ++
 arch/x86/include/asm/setup.h    |  6 ++++
 arch/x86/kernel/head64.c        |  4 +++
 arch/x86/kernel/setup.c         |  3 ++
 arch/x86/mm/init_64.c           | 58 +++++++++++++++++++++++++++++++++
 arch/x86/mm/kaslr.c             | 11 +++++--
 8 files changed, 86 insertions(+), 3 deletions(-)

diff --git a/Documentation/x86/x86_64/mm.rst b/Documentation/x86/x86_64/mm.rst
index 267fc4808945..7978afe6c396 100644
--- a/Documentation/x86/x86_64/mm.rst
+++ b/Documentation/x86/x86_64/mm.rst
@@ -140,6 +140,10 @@ The direct mapping covers all memory in the system up to the highest
 memory address (this means in some cases it can also include PCI memory
 holes).
 
+With MKTME, we have multiple direct mappings, one per KeyID, placed
+next to each other. PAGE_OFFSET + N * direct_mapping_size can be used
+to find the direct mapping for KeyID-N.
+
 vmalloc space is lazily synchronized into the different PML4/PML5 pages of
 the processes using the page fault handler, with init_top_pgt as
 reference.
diff --git a/arch/x86/include/asm/page_32.h b/arch/x86/include/asm/page_32.h
index 94dbd51df58f..8bce788f9ca9 100644
--- a/arch/x86/include/asm/page_32.h
+++ b/arch/x86/include/asm/page_32.h
@@ -6,6 +6,7 @@
 
 #ifndef __ASSEMBLY__
 
+#define direct_mapping_size 0
 #define __phys_addr_nodebug(x)	((x) - PAGE_OFFSET)
 #ifdef CONFIG_DEBUG_VIRTUAL
 extern unsigned long __phys_addr(unsigned long);
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 939b1cff4a7b..f57fc3cc2246 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -14,6 +14,8 @@ extern unsigned long phys_base;
 extern unsigned long page_offset_base;
 extern unsigned long vmalloc_base;
 extern unsigned long vmemmap_base;
+extern unsigned long direct_mapping_size;
+extern unsigned long direct_mapping_mask;
 
 static inline unsigned long __phys_addr_nodebug(unsigned long x)
 {
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index ed8ec011a9fd..d2861074cf83 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -62,6 +62,12 @@ extern void x86_ce4100_early_setup(void);
 static inline void x86_ce4100_early_setup(void) { }
 #endif
 
+#ifdef CONFIG_MEMORY_PHYSICAL_PADDING
+void calculate_direct_mapping_size(void);
+#else
+static inline void calculate_direct_mapping_size(void) { }
+#endif
+
 #ifndef _SETUP
 
 #include <asm/espfix.h>
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 29ffa495bd1c..006d3ff46afe 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -60,6 +60,10 @@ EXPORT_SYMBOL(vmalloc_base);
 unsigned long vmemmap_base __ro_after_init = __VMEMMAP_BASE_L4;
 EXPORT_SYMBOL(vmemmap_base);
 #endif
+unsigned long direct_mapping_size __ro_after_init = -1UL;
+EXPORT_SYMBOL(direct_mapping_size);
+unsigned long direct_mapping_mask __ro_after_init = -1UL;
+EXPORT_SYMBOL(direct_mapping_mask);
 
 #define __head	__section(.head.text)
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index bbe35bf879f5..d12431e20876 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1077,6 +1077,9 @@ void __init setup_arch(char **cmdline_p)
 	 */
 	init_cache_modes();
 
+	 /* direct_mapping_size has to be initialized before KASLR and MKTME */
+	calculate_direct_mapping_size();
+
 	/*
 	 * Define random base addresses for memory sections after max_pfn is
 	 * defined and before each memory section base is used.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index a6b5c653727b..4c1f93df47a5 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1440,6 +1440,64 @@ unsigned long memory_block_size_bytes(void)
 	return memory_block_size_probed;
 }
 
+#ifdef CONFIG_MEMORY_PHYSICAL_PADDING
+void __init calculate_direct_mapping_size(void)
+{
+	unsigned long available_va;
+
+	/* 1/4 of virtual address space is dedicated to the direct mapping */
+	available_va = 1UL << (__VIRTUAL_MASK_SHIFT - 1);
+
+	/* How much memory does the system have? */
+	direct_mapping_size = max_pfn << PAGE_SHIFT;
+	direct_mapping_size = round_up(direct_mapping_size, 1UL << 40);
+
+	if (!mktme_nr_keyids())
+		goto out;
+
+	/*
+	 * For MKTME we need direct_mapping_size to be a power of 2.
+	 * It makes the __pa() implementation efficient.
+	 */
+	direct_mapping_size = roundup_pow_of_two(direct_mapping_size);
+
+	/*
+	 * Not enough virtual address space to address all physical memory with
+	 * MKTME enabled. Even without padding.
+	 *
+	 * Disable MKTME instead.
+	 */
+	if (direct_mapping_size > available_va / (mktme_nr_keyids() + 1)) {
+		pr_err("x86/mktme: Disabled. Not enough virtual address space\n");
+		pr_err("x86/mktme: Consider switching to 5-level paging\n");
+		mktme_disable();
+		goto out;
+	}
+
+	/*
+	 * Virtual address space is divided between per-KeyID direct mappings.
+	 */
+	available_va /= mktme_nr_keyids() + 1;
+out:
+	/* Add padding, if there's enough virtual address space */
+	direct_mapping_size += (1UL << 40) * CONFIG_MEMORY_PHYSICAL_PADDING;
+	if (mktme_nr_keyids())
+		direct_mapping_size = roundup_pow_of_two(direct_mapping_size);
+
+	if (direct_mapping_size > available_va)
+		direct_mapping_size = available_va;
+
+	/*
+	 * For MKTME, make sure direct_mapping_size is still power-of-2
+	 * after adding padding and calculate mask that is used in __pa().
+	 */
+	if (mktme_nr_keyids()) {
+		direct_mapping_size = rounddown_pow_of_two(direct_mapping_size);
+		direct_mapping_mask = direct_mapping_size - 1;
+	}
+}
+#endif
+
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 /*
  * Initialise the sparsemem vmemmap using huge-pages at the PMD level.
diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 580b82c2621b..83af41d289ed 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -103,10 +103,15 @@ void __init kernel_randomize_memory(void)
 	 * add padding if needed (especially for memory hotplug support).
 	 */
 	BUG_ON(kaslr_regions[0].base != &page_offset_base);
-	memory_tb = DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) +
-		CONFIG_MEMORY_PHYSICAL_PADDING;
 
-	/* Adapt phyiscal memory region size based on available memory */
+	/*
+	 * Calculate space required to map all physical memory.
+	 * In case of MKTME, we map physical memory multiple times, once for
+	 * each KeyID. If MKTME is disabled, mktme_nr_keyids() is 0.
+	 */
+	memory_tb = (direct_mapping_size * (mktme_nr_keyids() + 1)) >> TB_SHIFT;
+
+	/* Adapt physical memory region size based on available memory */
 	if (memory_tb < kaslr_regions[0].size_tb)
 		kaslr_regions[0].size_tb = memory_tb;
 
-- 
2.21.0


Thread overview: 68+ messages
2019-07-31 15:07 [PATCHv2 00/59] Intel MKTME enabling Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 01/59] mm: Do no merge VMAs with different encryption KeyIDs Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 02/59] mm: Add helpers to setup zero page mappings Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 03/59] mm/ksm: Do not merge pages with different KeyIDs Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 04/59] mm/page_alloc: Unify alloc_hugepage_vma() Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 05/59] mm/page_alloc: Handle allocation for encrypted memory Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 06/59] mm/khugepaged: Handle encrypted pages Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 07/59] x86/mm: Mask out KeyID bits from page table entry pfn Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 08/59] x86/mm: Introduce helpers to read number, shift and mask of KeyIDs Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 09/59] x86/mm: Store bitmask of the encryption algorithms supported by MKTME Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 10/59] x86/mm: Preserve KeyID on pte_modify() and pgprot_modify() Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 11/59] x86/mm: Detect MKTME early Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 12/59] x86/mm: Add a helper to retrieve KeyID for a page Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 13/59] x86/mm: Add a helper to retrieve KeyID for a VMA Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 14/59] x86/mm: Add hooks to allocate and free encrypted pages Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 15/59] x86/mm: Map zero pages into encrypted mappings correctly Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 16/59] x86/mm: Rename CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 17/59] x86/mm: Allow to disable MKTME after enumeration Kirill A. Shutemov
2019-07-31 15:07 ` Kirill A. Shutemov [this message]
2019-07-31 15:07 ` [PATCHv2 19/59] x86/mm: Implement syncing per-KeyID direct mappings Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 20/59] x86/mm: Handle encrypted memory in page_to_virt() and __pa() Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 21/59] mm/page_ext: Export lookup_page_ext() symbol Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 22/59] mm/rmap: Clear vma->anon_vma on unlink_anon_vmas() Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 23/59] x86/pconfig: Set an activated algorithm in all MKTME commands Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 24/59] keys/mktme: Introduce a Kernel Key Service for MKTME Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 25/59] keys/mktme: Preparse the MKTME key payload Kirill A. Shutemov
2019-08-05 11:58   ` Ben Boeckel
2019-08-05 20:31     ` Alison Schofield
2019-08-13 13:06       ` Ben Boeckel
2019-07-31 15:07 ` [PATCHv2 26/59] keys/mktme: Instantiate MKTME keys Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 27/59] keys/mktme: Destroy " Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 28/59] keys/mktme: Move the MKTME payload into a cache aligned structure Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 29/59] keys/mktme: Set up PCONFIG programming targets for MKTME keys Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 30/59] keys/mktme: Program MKTME keys into the platform hardware Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 31/59] keys/mktme: Set up a percpu_ref_count for MKTME keys Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 32/59] keys/mktme: Clear the key programming from the MKTME hardware Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 33/59] keys/mktme: Require CAP_SYS_RESOURCE capability for MKTME keys Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 34/59] acpi: Remove __init from acpi table parsing functions Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 35/59] acpi/hmat: Determine existence of an ACPI HMAT Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 36/59] keys/mktme: Require ACPI HMAT to register the MKTME Key Service Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 37/59] acpi/hmat: Evaluate topology presented in ACPI HMAT for MKTME Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 38/59] keys/mktme: Do not allow key creation in unsafe topologies Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 39/59] keys/mktme: Support CPU hotplug for MKTME key service Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 40/59] keys/mktme: Block memory hotplug additions when MKTME is enabled Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 41/59] mm: Generalize the mprotect implementation to support extensions Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 42/59] syscall/x86: Wire up a system call for MKTME encryption keys Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 43/59] x86/mm: Set KeyIDs in encrypted VMAs for MKTME Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 44/59] mm: Add the encrypt_mprotect() system call " Kirill A. Shutemov
2019-07-31 15:07 ` [PATCHv2 45/59] x86/mm: Keep reference counts on hardware key usage " Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 46/59] mm: Restrict MKTME memory encryption to anonymous VMAs Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 47/59] kvm, x86, mmu: setup MKTME keyID to spte for given PFN Kirill A. Shutemov
2019-08-06 20:26   ` Lendacky, Thomas
2019-08-07 14:28     ` Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 48/59] iommu/vt-d: Support MKTME in DMA remapping Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 49/59] x86/mm: introduce common code for mem encryption Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 50/59] x86/mm: Use common code for DMA memory encryption Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 51/59] x86/mm: Disable MKTME on incompatible platform configurations Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 52/59] x86/mm: Disable MKTME if not all system memory supports encryption Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 53/59] x86: Introduce CONFIG_X86_INTEL_MKTME Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 54/59] x86/mktme: Overview of Multi-Key Total Memory Encryption Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 55/59] x86/mktme: Document the MKTME provided security mitigations Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 56/59] x86/mktme: Document the MKTME kernel configuration requirements Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 57/59] x86/mktme: Document the MKTME Key Service API Kirill A. Shutemov
2019-08-05 11:58   ` Ben Boeckel
2019-08-05 20:44     ` Alison Schofield
2019-08-13 13:07       ` Ben Boeckel
2019-07-31 15:08 ` [PATCHv2 58/59] x86/mktme: Document the MKTME API for anonymous memory encryption Kirill A. Shutemov
2019-07-31 15:08 ` [PATCHv2 59/59] x86/mktme: Demonstration program using the MKTME APIs Kirill A. Shutemov
