* [PATCH v2 0/4] x86/mm: Fix some issues with using trampoline_pgd
@ 2021-09-29 14:54 Joerg Roedel
  2021-09-29 14:54 ` [PATCH v2 1/4] x86/realmode: Add comment for Global bit usage in trampoline_pgd Joerg Roedel
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Joerg Roedel @ 2021-09-29 14:54 UTC (permalink / raw)
  To: x86
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Joerg Roedel, Mike Rapoport,
	Andrew Morton, Brijesh Singh, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Hi,

here are a few fixes and documentation improvements for the
kernel's use of the trampoline_pgd. The first patch adds a comment to
document that the trampoline_pgd aliases kernel page-tables in the
user address range, establishing global TLB entries for these
addresses.

The next two patches add global TLB flushes when switching to and from
the trampoline_pgd. The last patch extends the trampoline_pgd to cover
the whole kernel address range. This is needed to make sure the stack
and the real_mode_header don't get unmapped when switching to the
trampoline_pgd.

Please review.

Thanks,

	Joerg

Joerg Roedel (4):
  x86/realmode: Add comment for Global bit usage in trampoline_pgd
  x86/mm/64: Flush global TLB on AP bringup
  x86/mm: Flush global TLB when switching to trampoline page-table
  x86/64/mm: Map all kernel memory into trampoline_pgd

 arch/x86/include/asm/realmode.h |  1 +
 arch/x86/kernel/cpu/common.c    |  6 ++++++
 arch/x86/kernel/reboot.c        | 12 ++----------
 arch/x86/mm/init.c              |  5 +++++
 arch/x86/realmode/init.c        | 31 ++++++++++++++++++++++++++++++-
 5 files changed, 44 insertions(+), 11 deletions(-)


base-commit: 5816b3e6577eaa676ceb00a848f0fd65fe2adc29
-- 
2.33.0



* [PATCH v2 1/4] x86/realmode: Add comment for Global bit usage in trampoline_pgd
  2021-09-29 14:54 [PATCH v2 0/4] x86/mm: Fix some issues with using trampoline_pgd Joerg Roedel
@ 2021-09-29 14:54 ` Joerg Roedel
  2021-09-29 14:54 ` [PATCH v2 2/4] x86/mm/64: Flush global TLB on AP bringup Joerg Roedel
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2021-09-29 14:54 UTC (permalink / raw)
  To: x86
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Joerg Roedel, Mike Rapoport,
	Andrew Morton, Brijesh Singh, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Document the fact that using the trampoline_pgd will result in the
creation of global TLB entries in the user range of the address
space.
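
For context, the aliasing happens in two existing places: init_trampoline()
(the function touched below) remembers the kernel's direct-mapping PGD entry,
and setup_real_mode() later installs that entry into the user slot of the
trampoline page-table. A condensed, illustrative sketch of those two existing
assignments (non-KASLR case shown for brevity):

	/* init_trampoline(): remember the PGD entry of the direct mapping */
	trampoline_pgd_entry = init_top_pgt[pgd_index(__PAGE_OFFSET)];

	/*
	 * setup_real_mode(): alias that entry at virtual address 0, so the
	 * page-tables underneath, including any _PAGE_GLOBAL bits in their
	 * PTEs, are also used for the low, user-range addresses.
	 */
	trampoline_pgd[0] = trampoline_pgd_entry.pgd;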

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/mm/init.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 23a14d82e783..accd702d4253 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -714,6 +714,11 @@ static void __init memory_map_bottom_up(unsigned long map_start,
 static void __init init_trampoline(void)
 {
 #ifdef CONFIG_X86_64
+	/*
+	 * The code below will alias kernel page-tables in the user-range of the
+	 * address space, including the Global bit. So global TLB entries will
+	 * be created when using the trampoline page-table.
+	 */
 	if (!kaslr_memory_enabled())
 		trampoline_pgd_entry = init_top_pgt[pgd_index(__PAGE_OFFSET)];
 	else
-- 
2.33.0



* [PATCH v2 2/4] x86/mm/64: Flush global TLB on AP bringup
  2021-09-29 14:54 [PATCH v2 0/4] x86/mm: Fix some issues with using trampoline_pgd Joerg Roedel
  2021-09-29 14:54 ` [PATCH v2 1/4] x86/realmode: Add comment for Global bit usage in trampoline_pgd Joerg Roedel
@ 2021-09-29 14:54 ` Joerg Roedel
  2021-09-29 15:09   ` Dave Hansen
  2021-09-29 14:55 ` [PATCH v2 3/4] x86/mm: Flush global TLB when switching to trampoline page-table Joerg Roedel
  2021-09-29 14:55 ` [PATCH v2 4/4] x86/64/mm: Map all kernel memory into trampoline_pgd Joerg Roedel
  3 siblings, 1 reply; 10+ messages in thread
From: Joerg Roedel @ 2021-09-29 14:54 UTC (permalink / raw)
  To: x86
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Joerg Roedel, Mike Rapoport,
	Andrew Morton, Brijesh Singh, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

The AP bringup code uses the trampoline_pgd page-table, which
establishes global mappings in the user range of the address space.
Flush the global TLB entries after CR4 is set up for the AP to make sure
no stale entries remain in the TLB.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/cpu/common.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 0f8885949e8c..0f71ea2e5680 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -436,6 +436,12 @@ void cr4_init(void)
 
 	/* Initialize cr4 shadow for this CPU. */
 	this_cpu_write(cpu_tlbstate.cr4, cr4);
+
+	/*
+	 * Flush any global TLB entries that might be left from the
+	 * trampoline_pgd.
+	 */
+	__flush_tlb_all();
 }
 
 /*
-- 
2.33.0



* [PATCH v2 3/4] x86/mm: Flush global TLB when switching to trampoline page-table
  2021-09-29 14:54 [PATCH v2 0/4] x86/mm: Fix some issues with using trampoline_pgd Joerg Roedel
  2021-09-29 14:54 ` [PATCH v2 1/4] x86/realmode: Add comment for Global bit usage in trampoline_pgd Joerg Roedel
  2021-09-29 14:54 ` [PATCH v2 2/4] x86/mm/64: Flush global TLB on AP bringup Joerg Roedel
@ 2021-09-29 14:55 ` Joerg Roedel
  2021-09-29 15:07   ` Dave Hansen
  2021-09-29 14:55 ` [PATCH v2 4/4] x86/64/mm: Map all kernel memory into trampoline_pgd Joerg Roedel
  3 siblings, 1 reply; 10+ messages in thread
From: Joerg Roedel @ 2021-09-29 14:55 UTC (permalink / raw)
  To: x86
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Joerg Roedel, Mike Rapoport,
	Andrew Morton, Brijesh Singh, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Move the switching code into a function so that it can be re-used, and
add a global TLB flush. This makes sure that any use of memory which is
not mapped in the trampoline page-table is reliably caught.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/include/asm/realmode.h |  1 +
 arch/x86/kernel/reboot.c        | 12 ++----------
 arch/x86/realmode/init.c        | 19 +++++++++++++++++++
 3 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 5db5d083c873..331474b150f1 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -89,6 +89,7 @@ static inline void set_real_mode_mem(phys_addr_t mem)
 }
 
 void reserve_real_mode(void);
+void load_trampoline_pgtable(void);
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 0a40df66a40d..fa700b46588e 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -113,17 +113,9 @@ void __noreturn machine_real_restart(unsigned int type)
 	spin_unlock(&rtc_lock);
 
 	/*
-	 * Switch back to the initial page table.
+	 * Switch to the trampoline page table.
 	 */
-#ifdef CONFIG_X86_32
-	load_cr3(initial_page_table);
-#else
-	write_cr3(real_mode_header->trampoline_pgd);
-
-	/* Exiting long mode will fail if CR4.PCIDE is set. */
-	if (boot_cpu_has(X86_FEATURE_PCID))
-		cr4_clear_bits(X86_CR4_PCIDE);
-#endif
+	load_trampoline_pgtable();
 
 	/* Jump to the identity-mapped low memory code */
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 31b5856010cb..0cfe1046cec9 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -17,6 +17,25 @@ u32 *trampoline_cr4_features;
 /* Hold the pgd entry used on booting additional CPUs */
 pgd_t trampoline_pgd_entry;
 
+void load_trampoline_pgtable(void)
+{
+#ifdef CONFIG_X86_32
+	load_cr3(initial_page_table);
+#else
+	/* Exiting long mode will fail if CR4.PCIDE is set. */
+	if (boot_cpu_has(X86_FEATURE_PCID))
+		cr4_clear_bits(X86_CR4_PCIDE);
+
+	write_cr3(real_mode_header->trampoline_pgd);
+#endif
+
+	/*
+	 * Flush global TLB entries to catch any bugs where code running on the
+	 * trampoline_pgd uses memory not mapped into the trampoline page-table.
+	 */
+	__flush_tlb_all();
+}
+
 void __init reserve_real_mode(void)
 {
 	phys_addr_t mem;
-- 
2.33.0



* [PATCH v2 4/4] x86/64/mm: Map all kernel memory into trampoline_pgd
  2021-09-29 14:54 [PATCH v2 0/4] x86/mm: Fix some issues with using trampoline_pgd Joerg Roedel
                   ` (2 preceding siblings ...)
  2021-09-29 14:55 ` [PATCH v2 3/4] x86/mm: Flush global TLB when switching to trampoline page-table Joerg Roedel
@ 2021-09-29 14:55 ` Joerg Roedel
  2021-09-29 15:22   ` Dave Hansen
  3 siblings, 1 reply; 10+ messages in thread
From: Joerg Roedel @ 2021-09-29 14:55 UTC (permalink / raw)
  To: x86
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Joerg Roedel, Mike Rapoport,
	Andrew Morton, Brijesh Singh, linux-kernel, stable

From: Joerg Roedel <jroedel@suse.de>

The trampoline_pgd only maps the 0xffffff8000000000-0xffffffffffffffff
range of kernel memory (with 4-level paging), i.e. the topmost 512 GiB
covered by the last PGD entry. This range contains the kernel's
text+data+bss mappings and the module mapping space, but not the direct
mapping and the vmalloc area.

This is enough to get application processors out of real mode, but for
code that switches back to real mode the trampoline_pgd is missing
important parts of the address space. For example, consider this code
from arch/x86/kernel/reboot.c, function machine_real_restart() for a
64-bit kernel:

	#ifdef CONFIG_X86_32
		load_cr3(initial_page_table);
	#else
		write_cr3(real_mode_header->trampoline_pgd);

		/* Exiting long mode will fail if CR4.PCIDE is set. */
		if (boot_cpu_has(X86_FEATURE_PCID))
			cr4_clear_bits(X86_CR4_PCIDE);
	#endif

		/* Jump to the identity-mapped low memory code */
	#ifdef CONFIG_X86_32
		asm volatile("jmpl *%0" : :
			     "rm" (real_mode_header->machine_real_restart_asm),
			     "a" (type));
	#else
		asm volatile("ljmpl *%0" : :
			     "m" (real_mode_header->machine_real_restart_asm),
			     "D" (type));
	#endif

The code switches to the trampoline_pgd, which unmaps the direct mapping
and also the kernel stack. The call to cr4_clear_bits() will find no
stack and crash the machine. The real_mode_header pointer below points
into the direct mapping, and dereferencing it also causes a crash.

The only reason this does not always crash is that kernel mappings are
global, so the CR3 switch does not flush them from the TLB. But if these
mappings are not already in the TLB, the above code will crash before it
can jump to the real-mode stub.

Extend the trampoline_pgd to contain all kernel mappings to prevent
these crashes and to make code which runs on this page-table more
robust.

Cc: stable@vger.kernel.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/realmode/init.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 0cfe1046cec9..792cb9ca9b29 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -91,6 +91,7 @@ static void __init setup_real_mode(void)
 #ifdef CONFIG_X86_64
 	u64 *trampoline_pgd;
 	u64 efer;
+	int i;
 #endif
 
 	base = (unsigned char *)real_mode_header;
@@ -147,8 +148,17 @@ static void __init setup_real_mode(void)
 	trampoline_header->flags = 0;
 
 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
+
+	/*
+	 * Map all of kernel memory into the trampoline PGD so that it includes
+	 * the direct mapping and vmalloc space. This is needed to keep the
+	 * stack and real_mode_header mapped when switching to this page table.
+	 */
+	for (i = pgd_index(__PAGE_OFFSET); i < PTRS_PER_PGD; i++)
+		trampoline_pgd[i] = init_top_pgt[i].pgd;
+
+	/* Map the real mode stub as virtual == physical */
 	trampoline_pgd[0] = trampoline_pgd_entry.pgd;
-	trampoline_pgd[511] = init_top_pgt[511].pgd;
 #endif
 
 	sme_sev_setup_real_mode(trampoline_header);
-- 
2.33.0



* Re: [PATCH v2 3/4] x86/mm: Flush global TLB when switching to trampoline page-table
  2021-09-29 14:55 ` [PATCH v2 3/4] x86/mm: Flush global TLB when switching to trampoline page-table Joerg Roedel
@ 2021-09-29 15:07   ` Dave Hansen
  2021-10-01 12:37     ` Joerg Roedel
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Hansen @ 2021-09-29 15:07 UTC (permalink / raw)
  To: Joerg Roedel, x86
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Joerg Roedel, Mike Rapoport,
	Andrew Morton, Brijesh Singh, linux-kernel

On 9/29/21 7:55 AM, Joerg Roedel wrote:
> +	/*
> +	 * Flush global TLB entries to catch any bugs where code running on the
> +	 * trampoline_pgd uses memory not mapped into the trampoline page-table.
> +	 */
> +	__flush_tlb_all();
> +}

This comment took me a minute to parse.  How about a bit more info, like:

	/*
	 * The CR3 writes above may not flush global TLB entries.
	 * Stale, global entries from previous sets of page tables may
	 * still be present.  Flush those stale entries.
	 *
	 * This ensures that memory accessed while running with
	 * trampoline_pgd is *actually* mapped into trampoline_pgd.
	 */



* Re: [PATCH v2 2/4] x86/mm/64: Flush global TLB on AP bringup
  2021-09-29 14:54 ` [PATCH v2 2/4] x86/mm/64: Flush global TLB on AP bringup Joerg Roedel
@ 2021-09-29 15:09   ` Dave Hansen
  2021-09-30 13:52     ` Joerg Roedel
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Hansen @ 2021-09-29 15:09 UTC (permalink / raw)
  To: Joerg Roedel, x86
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Joerg Roedel, Mike Rapoport,
	Andrew Morton, Brijesh Singh, linux-kernel

On 9/29/21 7:54 AM, Joerg Roedel wrote:
> The AP bringup code uses the trampoline_pgd page-table, which
> establishes global mappings in the user range of the address space.
> Flush the global TLB entries after CR4 is set up for the AP to make sure
> no stale entries remain in the TLB.
...
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 0f8885949e8c..0f71ea2e5680 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -436,6 +436,12 @@ void cr4_init(void)
>  
>  	/* Initialize cr4 shadow for this CPU. */
>  	this_cpu_write(cpu_tlbstate.cr4, cr4);
> +
> +	/*
> +	 * Flush any global TLB entries that might be left from the
> +	 * trampoline_pgd.
> +	 */
> +	__flush_tlb_all();
>  }

Is there a reason to do this flush here as opposed to doing it closer to
the CR3 write where we switch away from trampoline_pgd?  cr4_init()
seems like an odd place.


* Re: [PATCH v2 4/4] x86/64/mm: Map all kernel memory into trampoline_pgd
  2021-09-29 14:55 ` [PATCH v2 4/4] x86/64/mm: Map all kernel memory into trampoline_pgd Joerg Roedel
@ 2021-09-29 15:22   ` Dave Hansen
  0 siblings, 0 replies; 10+ messages in thread
From: Dave Hansen @ 2021-09-29 15:22 UTC (permalink / raw)
  To: Joerg Roedel, x86
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Joerg Roedel, Mike Rapoport,
	Andrew Morton, Brijesh Singh, linux-kernel, stable

On 9/29/21 7:55 AM, Joerg Roedel wrote:
...
> The only reason this does not always crash is that kernel mappings are
> global, so the CR3 switch does not flush them from the TLB. But if these
> mappings are not already in the TLB, the above code will crash before it
> can jump to the real-mode stub.

This would have been nice to have in the cover letter.  The whole
purpose for this series wasn't totally apparent until I read this.

> diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
> index 0cfe1046cec9..792cb9ca9b29 100644
> --- a/arch/x86/realmode/init.c
> +++ b/arch/x86/realmode/init.c
> @@ -91,6 +91,7 @@ static void __init setup_real_mode(void)
>  #ifdef CONFIG_X86_64
>  	u64 *trampoline_pgd;
>  	u64 efer;
> +	int i;
>  #endif
>  
>  	base = (unsigned char *)real_mode_header;
> @@ -147,8 +148,17 @@ static void __init setup_real_mode(void)
>  	trampoline_header->flags = 0;
>  
>  	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
> +
> +	/*
> +	 * Map all of kernel memory into the trampoline PGD so that it includes
> +	 * the direct mapping and vmalloc space. This is needed to keep the
> +	 * stack and real_mode_header mapped when switching to this page table.
> +	 */

This comment's mention of the direct map and vmalloc() makes a lot of
sense in the context of this patch where you're adding them.  But, it
doesn't mention the pgd[511] stuff.

Maybe just make it more generic:

	Include the entirety of the kernel mapping into the trampoline
	PGD.  This way, all mappings present in the normal kernel page
	tables are usable while running on trampoline_pgd.


> +	for (i = pgd_index(__PAGE_OFFSET); i < PTRS_PER_PGD; i++)
> +		trampoline_pgd[i] = init_top_pgt[i].pgd;
> +
> +	/* Map the real mode stub as virtual == physical */
>  	trampoline_pgd[0] = trampoline_pgd_entry.pgd;
> -	trampoline_pgd[511] = init_top_pgt[511].pgd;
>  #endif

Nit: can we preserve the order, please?

	/* Map the real mode stub as virtual == physical */
  	trampoline_pgd[0] = trampoline_pgd_entry.pgd;

	for (i = pgd_index(__PAGE_OFFSET); i < PTRS_PER_PGD; i++)
		trampoline_pgd[i] = init_top_pgt[i].pgd;


* Re: [PATCH v2 2/4] x86/mm/64: Flush global TLB on AP bringup
  2021-09-29 15:09   ` Dave Hansen
@ 2021-09-30 13:52     ` Joerg Roedel
  0 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2021-09-30 13:52 UTC (permalink / raw)
  To: Dave Hansen
  Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Dave Hansen, Andy Lutomirski, Peter Zijlstra, Joerg Roedel,
	Mike Rapoport, Andrew Morton, Brijesh Singh, linux-kernel

On Wed, Sep 29, 2021 at 08:09:38AM -0700, Dave Hansen wrote:
> On 9/29/21 7:54 AM, Joerg Roedel wrote:
>
> > +	__flush_tlb_all();
> >  }
> 
> Is there a reason to do this flush here as opposed to doing it closer to
> the CR3 write where we switch away from trampoline_pgd?  cr4_init()
> seems like an odd place.

Yeah, the reason is that global flushing is done by toggling CR4.PGE and
I didn't want to do that before CR4 is set up.

The CR3 switch away from the trampoline_pgd for secondary CPUs on x86-64
happens in head_64.S already. I will add some asm to do a global flush
there right after the CR3 switch. Secondary CPUs are already on kernel
virtual addresses at this point.
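
For reference, the CR4.PGE-toggle flush mentioned above amounts to something
like the following (a simplified sketch of what __flush_tlb_all() ends up
doing via native_flush_tlb_global(); the real code has more around it, and
the head_64.S variant would be the asm equivalent of these two CR4 writes):

	unsigned long cr4 = native_read_cr4();

	/* Toggling CR4.PGE flushes all TLB entries, including global ones */
	native_write_cr4(cr4 ^ X86_CR4_PGE);

	/* Restore the original CR4 value; if PGE was set, this re-enables it */
	native_write_cr4(cr4);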


	Joerg


* Re: [PATCH v2 3/4] x86/mm: Flush global TLB when switching to trampoline page-table
  2021-09-29 15:07   ` Dave Hansen
@ 2021-10-01 12:37     ` Joerg Roedel
  0 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2021-10-01 12:37 UTC (permalink / raw)
  To: Dave Hansen
  Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Dave Hansen, Andy Lutomirski, Peter Zijlstra, Joerg Roedel,
	Mike Rapoport, Andrew Morton, Brijesh Singh, linux-kernel

On Wed, Sep 29, 2021 at 08:07:10AM -0700, Dave Hansen wrote:
> 	/*
> 	 * The CR3 writes above may not flush global TLB entries.
> 	 * Stale, global entries from previous sets of page tables may
> 	 * still be present.  Flush those stale entries.
> 	 *
> 	 * This ensures that memory accessed while running with
> 	 * trampoline_pgd is *actually* mapped into trampoline_pgd.
> 	 */

Yes, this is better. I replaced my comment with this one (only did some
minor rewording).

Thanks,

	Joerg
