* [PATCH] x86/sev: Fully map the #VC exception stacks
@ 2021-10-01  4:40 Tom Lendacky
  2021-10-01  4:49 ` Tom Lendacky
  2021-10-01  8:57 ` Borislav Petkov
  0 siblings, 2 replies; 14+ messages in thread
From: Tom Lendacky @ 2021-10-01  4:40 UTC (permalink / raw)
  To: linux-kernel, x86
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Joerg Roedel, Brijesh Singh

The size of the exception stacks was recently increased, resulting in
stack sizes greater than a page in size. The #VC exception handling was
only mapping the first (bottom) page, resulting in an SEV-ES guest failing
to boot.
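
(For context: the commit in Fixes bumps the exception stack order, so,
roughly, and ignoring the KASAN padding:

	#define EXCEPTION_STACK_ORDER	(1 + KASAN_STACK_ORDER)	/* was 0 + ... */
	#define EXCEPTION_STKSZ		(PAGE_SIZE << EXCEPTION_STACK_ORDER)

each exception stack is now two pages, while a single cea_set_pte() call
maps exactly one page.)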

Update setup_vc_stacks() to map all the pages of both the IST stack area
and the fallback stack area.

Fixes: 7fae4c24a2b8 ("x86: Increase exception stack sizes")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kernel/sev.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..33e4704164cc 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -99,25 +99,33 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
 /* Needed in vc_early_forward_exception */
 void do_early_exception(struct pt_regs *regs, int trapnr);
 
+static void __init map_vc_stack(unsigned long bot, unsigned long top,
+				phys_addr_t pa)
+{
+	while (bot < top) {
+		cea_set_pte((void *)bot, pa, PAGE_KERNEL);
+		bot += PAGE_SIZE;
+		pa += PAGE_SIZE;
+	}
+}
+
 static void __init setup_vc_stacks(int cpu)
 {
 	struct sev_es_runtime_data *data;
 	struct cpu_entry_area *cea;
-	unsigned long vaddr;
-	phys_addr_t pa;
 
 	data = per_cpu(runtime_data, cpu);
 	cea  = get_cpu_entry_area(cpu);
 
 	/* Map #VC IST stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC);
-	pa    = __pa(data->ist_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
+	map_vc_stack(CEA_ESTACK_BOT(&cea->estacks, VC),
+		     CEA_ESTACK_TOP(&cea->estacks, VC),
+		     __pa(data->ist_stack));
 
 	/* Map VC fall-back stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC2);
-	pa    = __pa(data->fallback_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
+	map_vc_stack(CEA_ESTACK_BOT(&cea->estacks, VC2),
+		     CEA_ESTACK_TOP(&cea->estacks, VC2),
+		     __pa(data->fallback_stack));
 }
 
 static __always_inline bool on_vc_stack(struct pt_regs *regs)
-- 
2.33.0



* Re: [PATCH] x86/sev: Fully map the #VC exception stacks
  2021-10-01  4:40 [PATCH] x86/sev: Fully map the #VC exception stacks Tom Lendacky
@ 2021-10-01  4:49 ` Tom Lendacky
  2021-10-01  8:57 ` Borislav Petkov
  1 sibling, 0 replies; 14+ messages in thread
From: Tom Lendacky @ 2021-10-01  4:49 UTC (permalink / raw)
  To: linux-kernel, x86
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Joerg Roedel, Brijesh Singh

On 9/30/21 11:40 PM, Tom Lendacky wrote:
> The size of the exception stacks was recently increased, resulting in
> stack sizes greater than a page in size. The #VC exception handling was
> only mapping the first (bottom) page, resulting in an SEV-ES guest failing
> to boot.
> 
> Update setup_vc_stacks() to map all the pages of both the IST stack area
> and the fallback stack area.
> 
> Fixes: 7fae4c24a2b8 ("x86: Increase exception stack sizes")

Arguably the Fixes: tag may not be completely accurate, since the issue
was within setup_vc_stacks(). But it is mainly there so that anyone who
pulls the patch identified by the Fixes: tag knows they will definitely
need this patch as well to run SEV-ES.

Thanks,
Tom

> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>   arch/x86/kernel/sev.c | 24 ++++++++++++++++--------
>   1 file changed, 16 insertions(+), 8 deletions(-)


* Re: [PATCH] x86/sev: Fully map the #VC exception stacks
  2021-10-01  4:40 [PATCH] x86/sev: Fully map the #VC exception stacks Tom Lendacky
  2021-10-01  4:49 ` Tom Lendacky
@ 2021-10-01  8:57 ` Borislav Petkov
  2021-10-01 11:50   ` Joerg Roedel
  1 sibling, 1 reply; 14+ messages in thread
From: Borislav Petkov @ 2021-10-01  8:57 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Brijesh Singh

On Thu, Sep 30, 2021 at 11:40:50PM -0500, Tom Lendacky wrote:
> The size of the exception stacks was recently increased, resulting in
> stack sizes greater than a page in size. The #VC exception handling was
> only mapping the first (bottom) page, resulting in an SEV-ES guest failing
> to boot.
> 
> Update setup_vc_stacks() to map all the pages of both the IST stack area
> and the fallback stack area.
> 
> Fixes: 7fae4c24a2b8 ("x86: Increase exception stack sizes")
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/kernel/sev.c | 24 ++++++++++++++++--------
>  1 file changed, 16 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index a6895e440bc3..33e4704164cc 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -99,25 +99,33 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
>  /* Needed in vc_early_forward_exception */
>  void do_early_exception(struct pt_regs *regs, int trapnr);
>  
> +static void __init map_vc_stack(unsigned long bot, unsigned long top,
> +				phys_addr_t pa)
> +{
> +	while (bot < top) {
> +		cea_set_pte((void *)bot, pa, PAGE_KERNEL);
> +		bot += PAGE_SIZE;
> +		pa += PAGE_SIZE;
> +	}
> +}
> +
>  static void __init setup_vc_stacks(int cpu)
>  {
>  	struct sev_es_runtime_data *data;
>  	struct cpu_entry_area *cea;
> -	unsigned long vaddr;
> -	phys_addr_t pa;
>  
>  	data = per_cpu(runtime_data, cpu);
>  	cea  = get_cpu_entry_area(cpu);
>  
>  	/* Map #VC IST stack */
> -	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC);
> -	pa    = __pa(data->ist_stack);
> -	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
> +	map_vc_stack(CEA_ESTACK_BOT(&cea->estacks, VC),
> +		     CEA_ESTACK_TOP(&cea->estacks, VC),
> +		     __pa(data->ist_stack));

So this would not have broken if it had used EXCEPTION_STKSZ (or rather
EXCEPTION_STACK_ORDER, since we're mapping pages).

Please use those defines so that this keeps working when someone mad
decides to increase those exception stack sizes again because everything
*and* the kitchen sink wants to instrument the damn kernel. Nothing to
see here people...
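
An illustrative sketch of that suggestion, reusing the names from the
patch above (untested, just the shape):

	unsigned long off;

	/* Derive the number of PTEs from the stack size define */
	for (off = 0; off < EXCEPTION_STKSZ; off += PAGE_SIZE)
		cea_set_pte((void *)(vaddr + off), pa + off, PAGE_KERNEL);

That way the loop bound follows EXCEPTION_STKSZ instead of baking in a
single-page assumption.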

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH] x86/sev: Fully map the #VC exception stacks
  2021-10-01  8:57 ` Borislav Petkov
@ 2021-10-01 11:50   ` Joerg Roedel
  2021-10-01 12:58     ` Borislav Petkov
  0 siblings, 1 reply; 14+ messages in thread
From: Joerg Roedel @ 2021-10-01 11:50 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Tom Lendacky, linux-kernel, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Brijesh Singh

On Fri, Oct 01, 2021 at 10:57:57AM +0200, Borislav Petkov wrote:
> Please use those defines so that this keeps working when someone mad
> decides to increase those exception stack sizes again because everything
> *and* the kitchen sink wants to instrument the damn kernel. Nothing to
> see here people...

Yeah, I think the right fix is to export cea_map_percpu_pages() and move
the cea_map_stack() macro to a header and use it to map the VC stacks.
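
For reference, the cea_map_stack() machinery in
arch/x86/mm/cpu_entry_area.c looks roughly like this (quoted from
memory, so treat as approximate):

	#define cea_map_stack(name) do {				\
		npages = sizeof(estacks->name## _stack) / PAGE_SIZE;	\
		cea_map_percpu_pages(cea->estacks.name## _stack,	\
				     estacks->name## _stack, npages,	\
				     PAGE_KERNEL);			\
		} while (0)

It already derives the page count from the member size, which is exactly
what the open-coded mapping in sev.c is missing.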

Regards,

	Joerg


* Re: [PATCH] x86/sev: Fully map the #VC exception stacks
  2021-10-01 11:50   ` Joerg Roedel
@ 2021-10-01 12:58     ` Borislav Petkov
  2021-10-01 13:00       ` Joerg Roedel
  0 siblings, 1 reply; 14+ messages in thread
From: Borislav Petkov @ 2021-10-01 12:58 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Tom Lendacky, linux-kernel, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Brijesh Singh

On Fri, Oct 01, 2021 at 01:50:24PM +0200, Joerg Roedel wrote:
> Yeah, I think the right fix is to export cea_map_percpu_pages() and move
> the cea_map_stack() macro to a header and use it to map the VC stacks.

I'll do you one better: Put the #VC stack mapping into
percpu_setup_exception_stacks(), where it naturally belongs and where
the other stacks are being mapped instead of doing everything "by hand"
like now and exporting random helpers.
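
A minimal sketch of that shape, assuming a suitable SEV-ES check is
usable at that point:

	static void __init percpu_setup_exception_stacks(unsigned int cpu)
	{
		...
		cea_map_stack(MCE);

		/* Hypothetical guard; the real check may differ */
		if (sev_es_active()) {
			cea_map_stack(VC);
			cea_map_stack(VC2);
		}
	}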

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH] x86/sev: Fully map the #VC exception stacks
  2021-10-01 12:58     ` Borislav Petkov
@ 2021-10-01 13:00       ` Joerg Roedel
  2021-10-01 13:29         ` Peter Zijlstra
  2021-10-01 13:52         ` Borislav Petkov
  0 siblings, 2 replies; 14+ messages in thread
From: Joerg Roedel @ 2021-10-01 13:00 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Tom Lendacky, linux-kernel, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Brijesh Singh

On Fri, Oct 01, 2021 at 02:58:28PM +0200, Borislav Petkov wrote:
> On Fri, Oct 01, 2021 at 01:50:24PM +0200, Joerg Roedel wrote:
> > Yeah, I think the right fix is to export cea_map_percpu_pages() and move
> > the cea_map_stack() macro to a header and use it to map the VC stacks.
> 
> I'll do you one better: Put the #VC stack mapping into
> percpu_setup_exception_stacks(), where it naturally belongs and where
> the other stacks are being mapped instead of doing everything "by hand"
> like now and exporting random helpers.

The VC stacks are only allocated and mapped when SEV-ES is detected, so
they can't always be mapped by generic code.
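
The relevant flow, as visible in the diff context above:

	void __init sev_es_init_vc_handling(void)
	{
		...
		for_each_possible_cpu(cpu) {
			alloc_runtime_data(cpu);	/* allocates the #VC stacks */
			init_ghcb(cpu);
			setup_vc_stacks(cpu);		/* maps them into the CEA */
		}
	}

i.e. both the allocation and the mapping currently happen only on the
SEV-ES init path.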

Regards,

	Joerg


* Re: [PATCH] x86/sev: Fully map the #VC exception stacks
  2021-10-01 13:00       ` Joerg Roedel
@ 2021-10-01 13:29         ` Peter Zijlstra
  2021-10-01 13:52         ` Borislav Petkov
  1 sibling, 0 replies; 14+ messages in thread
From: Peter Zijlstra @ 2021-10-01 13:29 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Borislav Petkov, Tom Lendacky, linux-kernel, x86,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Brijesh Singh

On Fri, Oct 01, 2021 at 03:00:38PM +0200, Joerg Roedel wrote:
> On Fri, Oct 01, 2021 at 02:58:28PM +0200, Borislav Petkov wrote:
> > On Fri, Oct 01, 2021 at 01:50:24PM +0200, Joerg Roedel wrote:
> > > Yeah, I think the right fix is to export cea_map_percpu_pages() and move
> > > the cea_map_stack() macro to a header and use it to map the VC stacks.
> > 
> > I'll do you one better: Put the #VC stack mapping into
> > percpu_setup_exception_stacks(), where it naturally belongs and where
> > the other stacks are being mapped instead of doing everything "by hand"
> > like now and exporting random helpers.
> 
> The VC stacks are only allocated and mapped when SEV-ES is detected, so
> they can't always be mapped by generic code.

It's just a few pages per CPU, is that worth it? Why not have it
unconditionally allocated in the right place when it finds the CPU is
SEV capable?
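
(For scale: with EXCEPTION_STKSZ presumably at two pages after the size
increase, VC + VC2 come to 2 * 8 KiB = 16 KiB per CPU, i.e. the "4 pages
per CPU" figure that comes up later in the thread.)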


* Re: [PATCH] x86/sev: Fully map the #VC exception stacks
  2021-10-01 13:00       ` Joerg Roedel
  2021-10-01 13:29         ` Peter Zijlstra
@ 2021-10-01 13:52         ` Borislav Petkov
  2021-10-01 20:39           ` Borislav Petkov
  1 sibling, 1 reply; 14+ messages in thread
From: Borislav Petkov @ 2021-10-01 13:52 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Tom Lendacky, linux-kernel, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Brijesh Singh

On Fri, Oct 01, 2021 at 03:00:38PM +0200, Joerg Roedel wrote:
> The VC stacks are only allocated and mapped when SEV-ES is detected, so
> they can't always be mapped by generic code.

And? I am assuming you do know how to check whether SEV-ES is enabled.

:-)

I also assumed it is implicitly clear that the mapping should not happen
unconditionally but of course behind a check.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH] x86/sev: Fully map the #VC exception stacks
  2021-10-01 13:52         ` Borislav Petkov
@ 2021-10-01 20:39           ` Borislav Petkov
  2021-10-04 15:08             ` [PATCH] x86/sev: Make the #VC exception stacks part of the default stacks storage Borislav Petkov
  0 siblings, 1 reply; 14+ messages in thread
From: Borislav Petkov @ 2021-10-01 20:39 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Tom Lendacky, linux-kernel, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Brijesh Singh

It doesn't get any more straightforward than this.

We could ifdef the ESTACKS_MEMBERS VC and VC2 arrays so that they get
allocated only on a CONFIG_AMD_MEM_ENCRYPT kernel and we don't waste
4 pages per CPU on machines which don't do SEV. But meh.

Thoughts?

---
diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
index 3d52b094850a..13a3e8510c33 100644
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -21,9 +21,9 @@
 	char	MCE_stack_guard[guardsize];			\
 	char	MCE_stack[EXCEPTION_STKSZ];			\
 	char	VC_stack_guard[guardsize];			\
-	char	VC_stack[optional_stack_size];			\
+	char	VC_stack[EXCEPTION_STKSZ];			\
 	char	VC2_stack_guard[guardsize];			\
-	char	VC2_stack[optional_stack_size];			\
+	char	VC2_stack[EXCEPTION_STKSZ];			\
 	char	IST_top_guard[guardsize];			\
 
 /* The exception stacks' physical storage. No guard pages required */
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..88401675dabb 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -46,16 +46,6 @@ static struct ghcb __initdata *boot_ghcb;
 struct sev_es_runtime_data {
 	struct ghcb ghcb_page;
 
-	/* Physical storage for the per-CPU IST stack of the #VC handler */
-	char ist_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
-
-	/*
-	 * Physical storage for the per-CPU fall-back stack of the #VC handler.
-	 * The fall-back stack is used when it is not safe to switch back to the
-	 * interrupted stack in the #VC entry code.
-	 */
-	char fallback_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
-
 	/*
 	 * Reserve one page per CPU as backup storage for the unencrypted GHCB.
 	 * It is needed when an NMI happens while the #VC handler uses the real
@@ -99,27 +89,6 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
 /* Needed in vc_early_forward_exception */
 void do_early_exception(struct pt_regs *regs, int trapnr);
 
-static void __init setup_vc_stacks(int cpu)
-{
-	struct sev_es_runtime_data *data;
-	struct cpu_entry_area *cea;
-	unsigned long vaddr;
-	phys_addr_t pa;
-
-	data = per_cpu(runtime_data, cpu);
-	cea  = get_cpu_entry_area(cpu);
-
-	/* Map #VC IST stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC);
-	pa    = __pa(data->ist_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
-
-	/* Map VC fall-back stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC2);
-	pa    = __pa(data->fallback_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
-}
-
 static __always_inline bool on_vc_stack(struct pt_regs *regs)
 {
 	unsigned long sp = regs->sp;
@@ -787,7 +756,6 @@ void __init sev_es_init_vc_handling(void)
 	for_each_possible_cpu(cpu) {
 		alloc_runtime_data(cpu);
 		init_ghcb(cpu);
-		setup_vc_stacks(cpu);
 	}
 
 	sev_es_setup_play_dead();
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index f5e1e60c9095..82d062414f19 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -110,6 +110,13 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu)
 	cea_map_stack(NMI);
 	cea_map_stack(DB);
 	cea_map_stack(MCE);
+
+	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) {
+		if (sev_es_active()) {
+			cea_map_stack(VC);
+			cea_map_stack(VC2);
+		}
+	}
 }
 #else
 static inline void percpu_setup_exception_stacks(unsigned int cpu)

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* [PATCH] x86/sev: Make the #VC exception stacks part of the default stacks storage
  2021-10-01 20:39           ` Borislav Petkov
@ 2021-10-04 15:08             ` Borislav Petkov
  2021-10-04 21:41               ` [PATCH -v2] " Borislav Petkov
  0 siblings, 1 reply; 14+ messages in thread
From: Borislav Petkov @ 2021-10-04 15:08 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Tom Lendacky, linux-kernel, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Brijesh Singh

---
From: Borislav Petkov <bp@suse.de>
Date: Fri, 1 Oct 2021 21:41:20 +0200

The size of the exception stacks was increased by the commit in Fixes,
resulting in stack sizes greater than a page in size. The #VC exception
handling was only mapping the first (bottom) page, resulting in an
SEV-ES guest failing to boot.

Make the #VC exception stacks part of the default exception stacks
storage and allocate them with a CONFIG_AMD_MEM_ENCRYPT=y .config. Map
them only when a SEV-ES guest has been detected.

Rip out the custom VC stacks mapping and storage code.

 [ bp: Steal and adapt Tom's commit message. ]

Fixes: 7fae4c24a2b8 ("x86: Increase exception stack sizes")
Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/include/asm/cpu_entry_area.h | 16 +++++++++-----
 arch/x86/kernel/sev.c                 | 32 ---------------------------
 arch/x86/mm/cpu_entry_area.c          |  7 ++++++
 3 files changed, 18 insertions(+), 37 deletions(-)

diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
index 3d52b094850a..2512e1f5ac02 100644
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -10,8 +10,14 @@
 
 #ifdef CONFIG_X86_64
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+#define VC_EXCEPTION_STKSZ	EXCEPTION_STKSZ
+#else
+#define VC_EXCEPTION_STKSZ	0
+#endif
+
 /* Macro to enforce the same ordering and stack sizes */
-#define ESTACKS_MEMBERS(guardsize, optional_stack_size)		\
+#define ESTACKS_MEMBERS(guardsize)				\
 	char	DF_stack_guard[guardsize];			\
 	char	DF_stack[EXCEPTION_STKSZ];			\
 	char	NMI_stack_guard[guardsize];			\
@@ -21,19 +27,19 @@
 	char	MCE_stack_guard[guardsize];			\
 	char	MCE_stack[EXCEPTION_STKSZ];			\
 	char	VC_stack_guard[guardsize];			\
-	char	VC_stack[optional_stack_size];			\
+	char	VC_stack[VC_EXCEPTION_STKSZ];			\
 	char	VC2_stack_guard[guardsize];			\
-	char	VC2_stack[optional_stack_size];			\
+	char	VC2_stack[VC_EXCEPTION_STKSZ];			\
 	char	IST_top_guard[guardsize];			\
 
 /* The exception stacks' physical storage. No guard pages required */
 struct exception_stacks {
-	ESTACKS_MEMBERS(0, 0)
+	ESTACKS_MEMBERS(0)
 };
 
 /* The effective cpu entry area mapping with guard pages. */
 struct cea_exception_stacks {
-	ESTACKS_MEMBERS(PAGE_SIZE, EXCEPTION_STKSZ)
+	ESTACKS_MEMBERS(PAGE_SIZE)
 };
 
 /*
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..88401675dabb 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -46,16 +46,6 @@ static struct ghcb __initdata *boot_ghcb;
 struct sev_es_runtime_data {
 	struct ghcb ghcb_page;
 
-	/* Physical storage for the per-CPU IST stack of the #VC handler */
-	char ist_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
-
-	/*
-	 * Physical storage for the per-CPU fall-back stack of the #VC handler.
-	 * The fall-back stack is used when it is not safe to switch back to the
-	 * interrupted stack in the #VC entry code.
-	 */
-	char fallback_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
-
 	/*
 	 * Reserve one page per CPU as backup storage for the unencrypted GHCB.
 	 * It is needed when an NMI happens while the #VC handler uses the real
@@ -99,27 +89,6 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
 /* Needed in vc_early_forward_exception */
 void do_early_exception(struct pt_regs *regs, int trapnr);
 
-static void __init setup_vc_stacks(int cpu)
-{
-	struct sev_es_runtime_data *data;
-	struct cpu_entry_area *cea;
-	unsigned long vaddr;
-	phys_addr_t pa;
-
-	data = per_cpu(runtime_data, cpu);
-	cea  = get_cpu_entry_area(cpu);
-
-	/* Map #VC IST stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC);
-	pa    = __pa(data->ist_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
-
-	/* Map VC fall-back stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC2);
-	pa    = __pa(data->fallback_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
-}
-
 static __always_inline bool on_vc_stack(struct pt_regs *regs)
 {
 	unsigned long sp = regs->sp;
@@ -787,7 +756,6 @@ void __init sev_es_init_vc_handling(void)
 	for_each_possible_cpu(cpu) {
 		alloc_runtime_data(cpu);
 		init_ghcb(cpu);
-		setup_vc_stacks(cpu);
 	}
 
 	sev_es_setup_play_dead();
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index f5e1e60c9095..82d062414f19 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -110,6 +110,13 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu)
 	cea_map_stack(NMI);
 	cea_map_stack(DB);
 	cea_map_stack(MCE);
+
+	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) {
+		if (sev_es_active()) {
+			cea_map_stack(VC);
+			cea_map_stack(VC2);
+		}
+	}
 }
 #else
 static inline void percpu_setup_exception_stacks(unsigned int cpu)
-- 
2.29.2


-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* [PATCH -v2] x86/sev: Make the #VC exception stacks part of the default stacks storage
  2021-10-04 15:08             ` [PATCH] x86/sev: Make the #VC exception stacks part of the default stacks storage Borislav Petkov
@ 2021-10-04 21:41               ` Borislav Petkov
  2021-10-05 16:28                 ` Tom Lendacky
                                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Borislav Petkov @ 2021-10-04 21:41 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Tom Lendacky, linux-kernel, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Brijesh Singh

Yap,

here's v2, now tested. It seems we do need that optional_stack_size
second arg to ESTACKS_MEMBERS(), thx Tom.
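
In other words, the second argument lets the physical storage and the
effective CEA mapping diverge (cea_exception_stacks is untouched by the
diff below, so, assuming it stays as in the v1 diff above, the resulting
layout is roughly):

	/* Physical storage: zero-sized VC stacks without AMD_MEM_ENCRYPT */
	struct exception_stacks {
		ESTACKS_MEMBERS(0, VC_EXCEPTION_STKSZ)
	};

	/*
	 * Effective CEA mapping: fixed sizes so the CEA_ESTACK_* offsets
	 * and the cpu_entry_area layout never change with the config.
	 */
	struct cea_exception_stacks {
		ESTACKS_MEMBERS(PAGE_SIZE, EXCEPTION_STKSZ)
	};

so only the physical backing shrinks on !CONFIG_AMD_MEM_ENCRYPT kernels.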

---
From: Borislav Petkov <bp@suse.de>

The size of the exception stacks was increased by the commit in Fixes,
resulting in stack sizes greater than a page in size. The #VC exception
handling was only mapping the first (bottom) page, resulting in an
SEV-ES guest failing to boot.

Make the #VC exception stacks part of the default exception stacks
storage and allocate them with a CONFIG_AMD_MEM_ENCRYPT=y .config. Map
them only when a SEV-ES guest has been detected.

Rip out the custom VC stacks mapping and storage code.

 [ bp: Steal and adapt Tom's commit message. ]

Fixes: 7fae4c24a2b8 ("x86: Increase exception stack sizes")
Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/include/asm/cpu_entry_area.h |  8 ++++++-
 arch/x86/kernel/sev.c                 | 32 ---------------------------
 arch/x86/mm/cpu_entry_area.c          |  7 ++++++
 3 files changed, 14 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
index 3d52b094850a..dd5ea1bdf04c 100644
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -10,6 +10,12 @@
 
 #ifdef CONFIG_X86_64
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+#define VC_EXCEPTION_STKSZ	EXCEPTION_STKSZ
+#else
+#define VC_EXCEPTION_STKSZ	0
+#endif
+
 /* Macro to enforce the same ordering and stack sizes */
 #define ESTACKS_MEMBERS(guardsize, optional_stack_size)		\
 	char	DF_stack_guard[guardsize];			\
@@ -28,7 +34,7 @@
 
 /* The exception stacks' physical storage. No guard pages required */
 struct exception_stacks {
-	ESTACKS_MEMBERS(0, 0)
+	ESTACKS_MEMBERS(0, VC_EXCEPTION_STKSZ)
 };
 
 /* The effective cpu entry area mapping with guard pages. */
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 53a6837d354b..4d0d1c2b65e1 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -46,16 +46,6 @@ static struct ghcb __initdata *boot_ghcb;
 struct sev_es_runtime_data {
 	struct ghcb ghcb_page;
 
-	/* Physical storage for the per-CPU IST stack of the #VC handler */
-	char ist_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
-
-	/*
-	 * Physical storage for the per-CPU fall-back stack of the #VC handler.
-	 * The fall-back stack is used when it is not safe to switch back to the
-	 * interrupted stack in the #VC entry code.
-	 */
-	char fallback_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
-
 	/*
 	 * Reserve one page per CPU as backup storage for the unencrypted GHCB.
 	 * It is needed when an NMI happens while the #VC handler uses the real
@@ -99,27 +89,6 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
 /* Needed in vc_early_forward_exception */
 void do_early_exception(struct pt_regs *regs, int trapnr);
 
-static void __init setup_vc_stacks(int cpu)
-{
-	struct sev_es_runtime_data *data;
-	struct cpu_entry_area *cea;
-	unsigned long vaddr;
-	phys_addr_t pa;
-
-	data = per_cpu(runtime_data, cpu);
-	cea  = get_cpu_entry_area(cpu);
-
-	/* Map #VC IST stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC);
-	pa    = __pa(data->ist_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
-
-	/* Map VC fall-back stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC2);
-	pa    = __pa(data->fallback_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
-}
-
 static __always_inline bool on_vc_stack(struct pt_regs *regs)
 {
 	unsigned long sp = regs->sp;
@@ -787,7 +756,6 @@ void __init sev_es_init_vc_handling(void)
 	for_each_possible_cpu(cpu) {
 		alloc_runtime_data(cpu);
 		init_ghcb(cpu);
-		setup_vc_stacks(cpu);
 	}
 
 	sev_es_setup_play_dead();
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index f5e1e60c9095..6c2f1b76a0b6 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -110,6 +110,13 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu)
 	cea_map_stack(NMI);
 	cea_map_stack(DB);
 	cea_map_stack(MCE);
+
+	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) {
+		if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
+			cea_map_stack(VC);
+			cea_map_stack(VC2);
+		}
+	}
 }
 #else
 static inline void percpu_setup_exception_stacks(unsigned int cpu)
-- 
2.29.2


-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH -v2] x86/sev: Make the #VC exception stacks part of the default stacks storage
  2021-10-04 21:41               ` [PATCH -v2] " Borislav Petkov
@ 2021-10-05 16:28                 ` Tom Lendacky
  2021-10-05 20:32                 ` Brijesh Singh
  2021-10-06 19:56                 ` [tip: x86/core] " tip-bot2 for Borislav Petkov
  2 siblings, 0 replies; 14+ messages in thread
From: Tom Lendacky @ 2021-10-05 16:28 UTC (permalink / raw)
  To: Borislav Petkov, Joerg Roedel
  Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Brijesh Singh

On 10/4/21 4:41 PM, Borislav Petkov wrote:
> Yap,
> 
> here's v2, now tested. It seems we do need that optional_stack_size
> second arg to ESTACKS_MEMBERS(), thx Tom.
> 
> ---
> From: Borislav Petkov <bp@suse.de>

Tested-by: Tom Lendacky <thomas.lendacky@amd.com>

> 
> The size of the exception stacks was increased by the commit in Fixes,
> resulting in stack sizes greater than a page in size. The #VC exception
> handling was only mapping the first (bottom) page, resulting in an
> SEV-ES guest failing to boot.
> 
> Make the #VC exception stacks part of the default exception stacks
> storage and allocate them with a CONFIG_AMD_MEM_ENCRYPT=y .config. Map
> them only when a SEV-ES guest has been detected.
> 
> Rip out the custom VC stacks mapping and storage code.
> 
>   [ bp: Steal and adapt Tom's commit message. ]
> 
> Fixes: 7fae4c24a2b8 ("x86: Increase exception stack sizes")
> Signed-off-by: Borislav Petkov <bp@suse.de>


* Re: [PATCH -v2] x86/sev: Make the #VC exception stacks part of the default stacks storage
  2021-10-04 21:41               ` [PATCH -v2] " Borislav Petkov
  2021-10-05 16:28                 ` Tom Lendacky
@ 2021-10-05 20:32                 ` Brijesh Singh
  2021-10-06 19:56                 ` [tip: x86/core] " tip-bot2 for Borislav Petkov
  2 siblings, 0 replies; 14+ messages in thread
From: Brijesh Singh @ 2021-10-05 20:32 UTC (permalink / raw)
  To: Borislav Petkov, Joerg Roedel
  Cc: brijesh.singh, Tom Lendacky, linux-kernel, x86, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin



On 10/4/21 4:41 PM, Borislav Petkov wrote:
> Yap,
> 
> here's v2, now tested. It seems we do need that optional_stack_size
> second arg to ESTACKS_MEMBERS(), thx Tom.
> 
> ---
> From: Borislav Petkov <bp@suse.de>
> 
> The size of the exception stacks was increased by the commit in Fixes,
> resulting in stack sizes greater than a page in size. The #VC exception
> handling was only mapping the first (bottom) page, resulting in an
> SEV-ES guest failing to boot.
> 
> Make the #VC exception stacks part of the default exception stacks
> storage and allocate them with a CONFIG_AMD_MEM_ENCRYPT=y .config. Map
> them only when a SEV-ES guest has been detected.
> 
> Rip out the custom VC stacks mapping and storage code.
> 
>   [ bp: Steal and adapt Tom's commit message. ]
> 
> Fixes: 7fae4c24a2b8 ("x86: Increase exception stack sizes")
> Signed-off-by: Borislav Petkov <bp@suse.de>
> ---

Tested-by: Brijesh Singh <brijesh.singh@amd.com>

thanks


* [tip: x86/core] x86/sev: Make the #VC exception stacks part of the default stacks storage
  2021-10-04 21:41               ` [PATCH -v2] " Borislav Petkov
  2021-10-05 16:28                 ` Tom Lendacky
  2021-10-05 20:32                 ` Brijesh Singh
@ 2021-10-06 19:56                 ` tip-bot2 for Borislav Petkov
  2 siblings, 0 replies; 14+ messages in thread
From: tip-bot2 for Borislav Petkov @ 2021-10-06 19:56 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Borislav Petkov, Tom Lendacky, Brijesh Singh, x86, linux-kernel

The following commit has been merged into the x86/core branch of tip:

Commit-ID:     541ac97186d9ea88491961a46284de3603c914fd
Gitweb:        https://git.kernel.org/tip/541ac97186d9ea88491961a46284de3603c914fd
Author:        Borislav Petkov <bp@suse.de>
AuthorDate:    Fri, 01 Oct 2021 21:41:20 +02:00
Committer:     Borislav Petkov <bp@suse.de>
CommitterDate: Wed, 06 Oct 2021 21:48:27 +02:00

x86/sev: Make the #VC exception stacks part of the default stacks storage

The size of the exception stacks was increased by the commit in Fixes,
resulting in stack sizes greater than a page in size. The #VC exception
handling was only mapping the first (bottom) page, resulting in an
SEV-ES guest failing to boot.

Make the #VC exception stacks part of the default exception stacks
storage and allocate them with a CONFIG_AMD_MEM_ENCRYPT=y .config. Map
them only when a SEV-ES guest has been detected.

Rip out the custom VC stacks mapping and storage code.

 [ bp: Steal and adapt Tom's commit message. ]

Fixes: 7fae4c24a2b8 ("x86: Increase exception stack sizes")
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Link: https://lkml.kernel.org/r/YVt1IMjIs7pIZTRR@zn.tnic
---
 arch/x86/include/asm/cpu_entry_area.h |  8 ++++++-
 arch/x86/kernel/sev.c                 | 32 +--------------------------
 arch/x86/mm/cpu_entry_area.c          |  7 ++++++-
 3 files changed, 14 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
index 3d52b09..dd5ea1b 100644
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -10,6 +10,12 @@
 
 #ifdef CONFIG_X86_64
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+#define VC_EXCEPTION_STKSZ	EXCEPTION_STKSZ
+#else
+#define VC_EXCEPTION_STKSZ	0
+#endif
+
 /* Macro to enforce the same ordering and stack sizes */
 #define ESTACKS_MEMBERS(guardsize, optional_stack_size)		\
 	char	DF_stack_guard[guardsize];			\
@@ -28,7 +34,7 @@
 
 /* The exception stacks' physical storage. No guard pages required */
 struct exception_stacks {
-	ESTACKS_MEMBERS(0, 0)
+	ESTACKS_MEMBERS(0, VC_EXCEPTION_STKSZ)
 };
 
 /* The effective cpu entry area mapping with guard pages. */
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 53a6837..4d0d1c2 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -46,16 +46,6 @@ static struct ghcb __initdata *boot_ghcb;
 struct sev_es_runtime_data {
 	struct ghcb ghcb_page;
 
-	/* Physical storage for the per-CPU IST stack of the #VC handler */
-	char ist_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
-
-	/*
-	 * Physical storage for the per-CPU fall-back stack of the #VC handler.
-	 * The fall-back stack is used when it is not safe to switch back to the
-	 * interrupted stack in the #VC entry code.
-	 */
-	char fallback_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
-
 	/*
 	 * Reserve one page per CPU as backup storage for the unencrypted GHCB.
 	 * It is needed when an NMI happens while the #VC handler uses the real
@@ -99,27 +89,6 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
 /* Needed in vc_early_forward_exception */
 void do_early_exception(struct pt_regs *regs, int trapnr);
 
-static void __init setup_vc_stacks(int cpu)
-{
-	struct sev_es_runtime_data *data;
-	struct cpu_entry_area *cea;
-	unsigned long vaddr;
-	phys_addr_t pa;
-
-	data = per_cpu(runtime_data, cpu);
-	cea  = get_cpu_entry_area(cpu);
-
-	/* Map #VC IST stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC);
-	pa    = __pa(data->ist_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
-
-	/* Map VC fall-back stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC2);
-	pa    = __pa(data->fallback_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
-}
-
 static __always_inline bool on_vc_stack(struct pt_regs *regs)
 {
 	unsigned long sp = regs->sp;
@@ -787,7 +756,6 @@ void __init sev_es_init_vc_handling(void)
 	for_each_possible_cpu(cpu) {
 		alloc_runtime_data(cpu);
 		init_ghcb(cpu);
-		setup_vc_stacks(cpu);
 	}
 
 	sev_es_setup_play_dead();
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index f5e1e60..6c2f1b7 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -110,6 +110,13 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu)
 	cea_map_stack(NMI);
 	cea_map_stack(DB);
 	cea_map_stack(MCE);
+
+	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) {
+		if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
+			cea_map_stack(VC);
+			cea_map_stack(VC2);
+		}
+	}
 }
 #else
 static inline void percpu_setup_exception_stacks(unsigned int cpu)

