linux-kernel.vger.kernel.org archive mirror
From: Junaid Shahid <junaids@google.com>
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, pbonzini@redhat.com, jmattson@google.com,
	pjt@google.com, oweisse@google.com, alexandre.chartre@oracle.com,
	rppt@linux.ibm.com, dave.hansen@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, luto@kernel.org,
	linux-mm@kvack.org
Subject: [RFC PATCH 20/47] mm: asi: Support for locally non-sensitive vmalloc allocations
Date: Tue, 22 Feb 2022 21:21:56 -0800	[thread overview]
Message-ID: <20220223052223.1202152-21-junaids@google.com> (raw)
In-Reply-To: <20220223052223.1202152-1-junaids@google.com>

A new flag, VM_LOCAL_NONSENSITIVE is added to designate locally
non-sensitive vmalloc/vmap areas. When using the __vmalloc /
__vmalloc_node APIs, if the corresponding GFP flag is specified, the
VM flag is automatically added. When using the __vmalloc_node_range API,
either flag can be specified independently. The VM flag will only map
the vmalloc area as non-sensitive, while the GFP flag will only map the
underlying direct map area as non-sensitive.

When using the __vmalloc_node_range API, instead of VMALLOC_START/END,
VMALLOC_LOCAL_NONSENSITIVE_START/END should be used. This is the range
that will have different ASI page tables for each process, thereby
providing the local mapping.

A command line parameter vmalloc_local_nonsensitive_percent is added to
specify the approximate division between the per-process and global
vmalloc ranges. Note that regular/sensitive vmalloc/vmap allocations
are not restricted by this division and can go anywhere in the entire
vmalloc range. The division only applies to non-sensitive allocations.

Since no attempt is made to balance regular/sensitive allocations across
the division, it is possible that one of these ranges gets filled up
by regular allocations, leaving no room for the non-sensitive
allocations for which that range was designated. But since the vmalloc
range is fairly large, hopefully that will not be a problem in
practice. If that assumption turns out to be incorrect, we could
implement a more sophisticated scheme.

Signed-off-by: Junaid Shahid <junaids@google.com>


---
 arch/x86/include/asm/asi.h              |  2 +
 arch/x86/include/asm/page_64.h          |  2 +
 arch/x86/include/asm/pgtable_64_types.h |  7 ++-
 arch/x86/mm/asi.c                       | 57 ++++++++++++++++++
 include/asm-generic/asi.h               |  5 ++
 include/linux/vmalloc.h                 |  6 ++
 mm/vmalloc.c                            | 78 ++++++++++++++++++++-----
 7 files changed, 142 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h
index f11010c0334b..e3cbf6d8801e 100644
--- a/arch/x86/include/asm/asi.h
+++ b/arch/x86/include/asm/asi.h
@@ -46,6 +46,8 @@ DECLARE_PER_CPU_ALIGNED(struct asi_state, asi_cpu_state);
 
 extern pgd_t asi_global_nonsensitive_pgd[];
 
+void asi_vmalloc_init(void);
+
 int  asi_init_mm_state(struct mm_struct *mm);
 void asi_free_mm_state(struct mm_struct *mm);
 
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 2845eca02552..b17574349572 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -18,6 +18,8 @@ extern unsigned long vmemmap_base;
 
 #ifdef CONFIG_ADDRESS_SPACE_ISOLATION
 
+extern unsigned long vmalloc_global_nonsensitive_start;
+extern unsigned long vmalloc_local_nonsensitive_end;
 extern unsigned long asi_local_map_base;
 DECLARE_STATIC_KEY_FALSE(asi_local_map_initialized);
 
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 0fc380ba25b8..06793f7ef1aa 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -142,8 +142,13 @@ extern unsigned int ptrs_per_p4d;
 #define VMALLOC_END		(VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
 
 #ifdef CONFIG_ADDRESS_SPACE_ISOLATION
-#define VMALLOC_GLOBAL_NONSENSITIVE_START	VMALLOC_START
+
+#define VMALLOC_LOCAL_NONSENSITIVE_START	VMALLOC_START
+#define VMALLOC_LOCAL_NONSENSITIVE_END		vmalloc_local_nonsensitive_end
+
+#define VMALLOC_GLOBAL_NONSENSITIVE_START	vmalloc_global_nonsensitive_start
 #define VMALLOC_GLOBAL_NONSENSITIVE_END		VMALLOC_END
+
 #endif
 
 #define MODULES_VADDR		(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index 3ba0971a318d..91e5ff1224ff 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -3,6 +3,7 @@
 #include <linux/init.h>
 #include <linux/memblock.h>
 #include <linux/memcontrol.h>
+#include <linux/moduleparam.h>
 
 #include <asm/asi.h>
 #include <asm/pgalloc.h>
@@ -28,6 +29,17 @@ EXPORT_SYMBOL(asi_local_map_initialized);
 unsigned long asi_local_map_base __ro_after_init;
 EXPORT_SYMBOL(asi_local_map_base);
 
+unsigned long vmalloc_global_nonsensitive_start __ro_after_init;
+EXPORT_SYMBOL(vmalloc_global_nonsensitive_start);
+
+unsigned long vmalloc_local_nonsensitive_end __ro_after_init;
+EXPORT_SYMBOL(vmalloc_local_nonsensitive_end);
+
+/* Approximate percent only. Rounded to PGDIR_SIZE boundary. */
+static uint vmalloc_local_nonsensitive_percent __ro_after_init = 50;
+core_param(vmalloc_local_nonsensitive_percent,
+	   vmalloc_local_nonsensitive_percent, uint, 0444);
+
 int asi_register_class(const char *name, uint flags,
 		       const struct asi_hooks *ops)
 {
@@ -307,6 +319,10 @@ int asi_init(struct mm_struct *mm, int asi_index, struct asi **out_asi)
 		     i++)
 			set_pgd(asi->pgd + i, mm->asi[0].pgd[i]);
 
+		for (i = pgd_index(VMALLOC_LOCAL_NONSENSITIVE_START);
+		     i <= pgd_index(VMALLOC_LOCAL_NONSENSITIVE_END); i++)
+			set_pgd(asi->pgd + i, mm->asi[0].pgd[i]);
+
 		for (i = pgd_index(VMALLOC_GLOBAL_NONSENSITIVE_START);
 		     i < PTRS_PER_PGD; i++)
 			set_pgd(asi->pgd + i, asi_global_nonsensitive_pgd[i]);
@@ -432,6 +448,10 @@ void asi_free_mm_state(struct mm_struct *mm)
 			   pgd_index(ASI_LOCAL_MAP +
 				     PFN_PHYS(max_possible_pfn)) + 1);
 
+	asi_free_pgd_range(&mm->asi[0],
+			   pgd_index(VMALLOC_LOCAL_NONSENSITIVE_START),
+			   pgd_index(VMALLOC_LOCAL_NONSENSITIVE_END) + 1);
+
 	free_page((ulong)mm->asi[0].pgd);
 }
 
@@ -671,3 +691,40 @@ void asi_sync_mapping(struct asi *asi, void *start, size_t len)
 		for (; addr < end; addr = pgd_addr_end(addr, end))
 			asi_clone_pgd(asi->pgd, asi->mm->asi[0].pgd, addr);
 }
+
+void __init asi_vmalloc_init(void)
+{
+	uint start_index = pgd_index(VMALLOC_START);
+	uint end_index = pgd_index(VMALLOC_END);
+	uint global_start_index;
+
+	if (!boot_cpu_has(X86_FEATURE_ASI)) {
+		vmalloc_global_nonsensitive_start = VMALLOC_START;
+		vmalloc_local_nonsensitive_end = VMALLOC_END;
+		return;
+	}
+
+	if (vmalloc_local_nonsensitive_percent == 0) {
+		vmalloc_local_nonsensitive_percent = 1;
+		pr_warn("vmalloc_local_nonsensitive_percent must be non-zero\n");
+	}
+
+	if (vmalloc_local_nonsensitive_percent >= 100) {
+		vmalloc_local_nonsensitive_percent = 99;
+		pr_warn("vmalloc_local_nonsensitive_percent must be less than 100\n");
+	}
+
+	global_start_index = start_index + (end_index - start_index) *
+			     vmalloc_local_nonsensitive_percent / 100;
+	global_start_index = max(global_start_index, start_index + 1);
+
+	vmalloc_global_nonsensitive_start = -(PTRS_PER_PGD - global_start_index)
+					    * PGDIR_SIZE;
+	vmalloc_local_nonsensitive_end = vmalloc_global_nonsensitive_start - 1;
+
+	pr_debug("vmalloc_global_nonsensitive_start = %lx\n",
+		 vmalloc_global_nonsensitive_start);
+
+	VM_BUG_ON(vmalloc_local_nonsensitive_end >= VMALLOC_END);
+	VM_BUG_ON(vmalloc_global_nonsensitive_start <= VMALLOC_START);
+}
diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h
index a1c8ebff70e8..7c50d8b64fa4 100644
--- a/include/asm-generic/asi.h
+++ b/include/asm-generic/asi.h
@@ -18,6 +18,9 @@
 #define VMALLOC_GLOBAL_NONSENSITIVE_START	VMALLOC_START
 #define VMALLOC_GLOBAL_NONSENSITIVE_END		VMALLOC_END
 
+#define VMALLOC_LOCAL_NONSENSITIVE_START	VMALLOC_START
+#define VMALLOC_LOCAL_NONSENSITIVE_END		VMALLOC_END
+
 #ifndef _ASSEMBLY_
 
 struct asi_hooks {};
@@ -36,6 +39,8 @@ static inline int asi_init_mm_state(struct mm_struct *mm) { return 0; }
 
 static inline void asi_free_mm_state(struct mm_struct *mm) { }
 
+static inline void asi_vmalloc_init(void) { }
+
 static inline
 int asi_init(struct mm_struct *mm, int asi_index, struct asi **out_asi)
 {
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 5f85690f27b6..2b4eafc21fa5 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -41,8 +41,10 @@ struct notifier_block;		/* in notifier.h */
 
 #ifdef CONFIG_ADDRESS_SPACE_ISOLATION
 #define VM_GLOBAL_NONSENSITIVE	0x00000800	/* Similar to __GFP_GLOBAL_NONSENSITIVE */
+#define VM_LOCAL_NONSENSITIVE	0x00001000	/* Similar to __GFP_LOCAL_NONSENSITIVE */
 #else
 #define VM_GLOBAL_NONSENSITIVE	0
+#define VM_LOCAL_NONSENSITIVE	0
 #endif
 
 /* bits [20..32] reserved for arch specific ioremap internals */
@@ -67,6 +69,10 @@ struct vm_struct {
 	unsigned int		nr_pages;
 	phys_addr_t		phys_addr;
 	const void		*caller;
+#ifdef CONFIG_ADDRESS_SPACE_ISOLATION
+	/* Valid if flags contain VM_*_NONSENSITIVE */
+	struct asi		*asi;
+#endif
 };
 
 struct vmap_area {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f13bfe7e896b..ea94d8a1e2e9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2391,18 +2391,25 @@ void __init vmalloc_init(void)
 	 */
 	vmap_init_free_space();
 	vmap_initialized = true;
+
+	asi_vmalloc_init();
 }
 
+#ifdef CONFIG_ADDRESS_SPACE_ISOLATION
+
 static int asi_map_vm_area(struct vm_struct *area)
 {
 	if (!static_asi_enabled())
 		return 0;
 
 	if (area->flags & VM_GLOBAL_NONSENSITIVE)
-		return asi_map(ASI_GLOBAL_NONSENSITIVE, area->addr,
-			       get_vm_area_size(area));
+		area->asi = ASI_GLOBAL_NONSENSITIVE;
+	else if (area->flags & VM_LOCAL_NONSENSITIVE)
+		area->asi = ASI_LOCAL_NONSENSITIVE;
+	else
+		return 0;
 
-	return 0;
+	return asi_map(area->asi, area->addr, get_vm_area_size(area));
 }
 
 static void asi_unmap_vm_area(struct vm_struct *area)
@@ -2415,11 +2422,17 @@ static void asi_unmap_vm_area(struct vm_struct *area)
 	 * the case when the existing flush from try_purge_vmap_area_lazy()
 	 * and/or vm_unmap_aliases() happens non-lazily.
 	 */
-	if (area->flags & VM_GLOBAL_NONSENSITIVE)
-		asi_unmap(ASI_GLOBAL_NONSENSITIVE, area->addr,
-			  get_vm_area_size(area), true);
+	if (area->flags & (VM_GLOBAL_NONSENSITIVE | VM_LOCAL_NONSENSITIVE))
+		asi_unmap(area->asi, area->addr, get_vm_area_size(area), true);
 }
 
+#else
+
+static inline int asi_map_vm_area(struct vm_struct *area) { return 0; }
+static inline void asi_unmap_vm_area(struct vm_struct *area) { }
+
+#endif
+
 static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
 	struct vmap_area *va, unsigned long flags, const void *caller)
 {
@@ -2463,6 +2476,15 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (unlikely(!size))
 		return NULL;
 
+	if (static_asi_enabled()) {
+		VM_BUG_ON((flags & VM_LOCAL_NONSENSITIVE) &&
+			  !(start >= VMALLOC_LOCAL_NONSENSITIVE_START &&
+			    end <= VMALLOC_LOCAL_NONSENSITIVE_END));
+
+		VM_BUG_ON((flags & VM_GLOBAL_NONSENSITIVE) &&
+			  start < VMALLOC_GLOBAL_NONSENSITIVE_START);
+	}
+
 	if (flags & VM_IOREMAP)
 		align = 1ul << clamp_t(int, get_count_order_long(size),
 				       PAGE_SHIFT, IOREMAP_MAX_ORDER);
@@ -3073,8 +3095,22 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (WARN_ON_ONCE(!size))
 		return NULL;
 
-	if (static_asi_enabled() && (vm_flags & VM_GLOBAL_NONSENSITIVE))
-		gfp_mask |= __GFP_ZERO;
+	if (static_asi_enabled()) {
+		VM_BUG_ON((vm_flags & (VM_LOCAL_NONSENSITIVE |
+				       VM_GLOBAL_NONSENSITIVE)) ==
+			  (VM_LOCAL_NONSENSITIVE | VM_GLOBAL_NONSENSITIVE));
+
+		if ((vm_flags & VM_LOCAL_NONSENSITIVE) &&
+		    !mm_asi_enabled(current->mm)) {
+			vm_flags &= ~VM_LOCAL_NONSENSITIVE;
+
+			if (end == VMALLOC_LOCAL_NONSENSITIVE_END)
+				end = VMALLOC_END;
+		}
+
+		if (vm_flags & (VM_GLOBAL_NONSENSITIVE | VM_LOCAL_NONSENSITIVE))
+			gfp_mask |= __GFP_ZERO;
+	}
 
 	if ((size >> PAGE_SHIFT) > totalram_pages()) {
 		warn_alloc(gfp_mask, NULL,
@@ -3166,11 +3202,19 @@ void *__vmalloc_node(unsigned long size, unsigned long align,
 			    gfp_t gfp_mask, int node, const void *caller)
 {
 	ulong vm_flags = 0;
+	ulong start = VMALLOC_START, end = VMALLOC_END;
 
-	if (static_asi_enabled() && (gfp_mask & __GFP_GLOBAL_NONSENSITIVE))
-		vm_flags |= VM_GLOBAL_NONSENSITIVE;
+	if (static_asi_enabled()) {
+		if (gfp_mask & __GFP_GLOBAL_NONSENSITIVE) {
+			vm_flags |= VM_GLOBAL_NONSENSITIVE;
+			start = VMALLOC_GLOBAL_NONSENSITIVE_START;
+		} else if (gfp_mask & __GFP_LOCAL_NONSENSITIVE) {
+			vm_flags |= VM_LOCAL_NONSENSITIVE;
+			end = VMALLOC_LOCAL_NONSENSITIVE_END;
+		}
+	}
 
-	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
+	return __vmalloc_node_range(size, align, start, end,
 				gfp_mask, PAGE_KERNEL, vm_flags, node, caller);
 }
 /*
@@ -3678,9 +3722,15 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	/* verify parameters and allocate data structures */
 	BUG_ON(offset_in_page(align) || !is_power_of_2(align));
 
-	if (static_asi_enabled() && (flags & VM_GLOBAL_NONSENSITIVE)) {
-		vmalloc_start = VMALLOC_GLOBAL_NONSENSITIVE_START;
-		vmalloc_end = VMALLOC_GLOBAL_NONSENSITIVE_END;
+	if (static_asi_enabled()) {
+		VM_BUG_ON((flags & (VM_LOCAL_NONSENSITIVE |
+				    VM_GLOBAL_NONSENSITIVE)) ==
+			  (VM_LOCAL_NONSENSITIVE | VM_GLOBAL_NONSENSITIVE));
+
+		if (flags & VM_GLOBAL_NONSENSITIVE)
+			vmalloc_start = VMALLOC_GLOBAL_NONSENSITIVE_START;
+		else if (flags & VM_LOCAL_NONSENSITIVE)
+			vmalloc_end = VMALLOC_LOCAL_NONSENSITIVE_END;
 	}
 
 	vmalloc_start = ALIGN(vmalloc_start, align);
-- 
2.35.1.473.g83b2b277ed-goog

