From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
	x86@kernel.org, Andy Lutomirski <luto@kernel.org>,
	Josh Poimboeuf <jpoimboe@redhat.com>
Subject: Re: [patch V2 09/29] x86/exceptions: Add structs for exception stacks
Date: Fri, 5 Apr 2019 13:50:34 -0700	[thread overview]
Message-ID: <20190405205034.GD15808@linux.intel.com> (raw)
In-Reply-To: <20190405204838.GC15808@linux.intel.com>

On Fri, Apr 05, 2019 at 01:48:38PM -0700, Sean Christopherson wrote:
> On Fri, Apr 05, 2019 at 05:07:07PM +0200, Thomas Gleixner wrote:
> > At the moment everything assumes a full linear mapping of the various
> > exception stacks. Adding guard pages to the cpu entry area mapping of the
> > exception stacks will break that assumption.
> > 
> > As a preparatory step convert both the real storage and the effective
> > mapping in the cpu entry area from character arrays to structures.
> > 
> > To ensure that both structures have the same ordering and the same size for
> > the individual stacks, fill in the members with a macro. The guard size is
> > the only difference between the two resulting structures. For now both have
> > guard size 0 until the preparation of all usage sites is done.
> > 
> > Provide a couple of helper macros which are used in the following
> > conversions.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > ---
> >  arch/x86/include/asm/cpu_entry_area.h |   51 ++++++++++++++++++++++++++++++----
> >  arch/x86/kernel/cpu/common.c          |    2 -
> >  arch/x86/mm/cpu_entry_area.c          |    8 ++---
> >  3 files changed, 50 insertions(+), 11 deletions(-)
> > 
> > --- a/arch/x86/include/asm/cpu_entry_area.h
> > +++ b/arch/x86/include/asm/cpu_entry_area.h
> > @@ -7,6 +7,50 @@
> >  #include <asm/processor.h>
> >  #include <asm/intel_ds.h>
> >  
> > +#ifdef CONFIG_X86_64
> > +
> > +/* Macro to enforce the same ordering and stack sizes */
> > +#define ESTACKS_MEMBERS(guardsize)		\
> > +	char	DF_stack[EXCEPTION_STKSZ];	\
> > +	char	DF_stack_guard[guardsize];	\
> > +	char	NMI_stack[EXCEPTION_STKSZ];	\
> > +	char	NMI_stack_guard[guardsize];	\
> > +	char	DB_stack[DEBUG_STKSZ];		\
> > +	char	DB_stack_guard[guardsize];	\
> > +	char	MCE_stack[EXCEPTION_STKSZ];	\
> > +	char	MCE_stack_guard[guardsize];	\
> 
> Conceptually, shouldn't the stack guard precede its associated stack
> since the stacks grow down?  And don't we want a guard page below the
> DF_stack?  There could still be a guard page above MCE_stack,
> e.g. IST_stack_guard or something.
> 
> E.g. the example in patch "Speedup in_exception_stack()" also suggests

Gah, the example is from "x86/exceptions: Split debug IST stack".

> that "guard page" is associated with the stack physically above it:
> 
>       --- top of DB_stack       <- Initial stack
>       --- end of DB_stack
>           guard page
> 
>       --- top of DB1_stack      <- Top of stack after entering first #DB
>       --- end of DB1_stack
>           guard page
> 
>       --- top of DB2_stack      <- Top of stack after entering second #DB
>       --- end of DB2_stack
>           guard page
> 
> > +
> > +/* The exception stacks linear storage. No guard pages required */
> > +struct exception_stacks {
> > +	ESTACKS_MEMBERS(0)
> > +};
> > +
> > +/*
> > + * The effective cpu entry area mapping with guard pages. Guard size is
> > + * zero until the code which makes assumptions about linear mapping is
> > + * cleaned up.
> > + */
> > +struct cea_exception_stacks {
> > +	ESTACKS_MEMBERS(0)
> > +};
> > +
> > +#define CEA_ESTACK_TOP(ceastp, st)			\
> > +	((unsigned long)&(ceastp)->st## _stack_guard)
> 
> IMO, using the stack guard to define the top of stack is unnecessarily
> confusing and fragile, e.g. reordering the names of the stack guards
> would break this macro.
> 
> What about:
> 
> #define CEA_ESTACK_TOP(ceastp, st)			\
> 	(CEA_ESTACK_BOT(ceastp, st) + CEA_ESTACK_SIZE(st))
> 
> > +#define CEA_ESTACK_BOT(ceastp, st)			\
> > +	((unsigned long)&(ceastp)->st## _stack)
> > +
> > +#define CEA_ESTACK_OFFS(st)					\
> > +	offsetof(struct cea_exception_stacks, st## _stack)
> > +
> > +#define CEA_ESTACK_SIZE(st)					\
> > +	sizeof(((struct cea_exception_stacks *)0)->st## _stack)
> > +
> > +#define CEA_ESTACK_PAGES					\
> > +	(sizeof(struct cea_exception_stacks) / PAGE_SIZE)
> > +
> > +#endif
> > +
> >  /*
> >   * cpu_entry_area is a percpu region that contains things needed by the CPU
> >   * and early entry/exit code.  Real types aren't used for all fields here
> > @@ -32,12 +76,9 @@ struct cpu_entry_area {
> >  
> >  #ifdef CONFIG_X86_64
> >  	/*
> > -	 * Exception stacks used for IST entries.
> > -	 *
> > -	 * In the future, this should have a separate slot for each stack
> > -	 * with guard pages between them.
> > +	 * Exception stacks used for IST entries with guard pages.
> >  	 */
> > -	char exception_stacks[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ];
> > +	struct cea_exception_stacks estacks;
> >  #endif
> >  #ifdef CONFIG_CPU_SUP_INTEL
> >  	/*
> > --- a/arch/x86/kernel/cpu/common.c
> > +++ b/arch/x86/kernel/cpu/common.c
> > @@ -1754,7 +1754,7 @@ void cpu_init(void)
> >  	 * set up and load the per-CPU TSS
> >  	 */
> >  	if (!oist->ist[0]) {
> > -		char *estacks = get_cpu_entry_area(cpu)->exception_stacks;
> > +		char *estacks = (char *)&get_cpu_entry_area(cpu)->estacks;
> >  
> >  		for (v = 0; v < N_EXCEPTION_STACKS; v++) {
> >  			estacks += exception_stack_sizes[v];
> > --- a/arch/x86/mm/cpu_entry_area.c
> > +++ b/arch/x86/mm/cpu_entry_area.c
> > @@ -13,8 +13,7 @@
> >  static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage);
> >  
> >  #ifdef CONFIG_X86_64
> > -static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
> > -	[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ]);
> > +static DEFINE_PER_CPU_PAGE_ALIGNED(struct exception_stacks, exception_stacks);
> >  #endif
> >  
> >  struct cpu_entry_area *get_cpu_entry_area(int cpu)
> > @@ -138,9 +137,8 @@ static void __init setup_cpu_entry_area(
> >  #ifdef CONFIG_X86_64
> >  	BUILD_BUG_ON(sizeof(exception_stacks) % PAGE_SIZE != 0);
> >  	BUILD_BUG_ON(sizeof(exception_stacks) !=
> > -		     sizeof(((struct cpu_entry_area *)0)->exception_stacks));
> > -	cea_map_percpu_pages(&cea->exception_stacks,
> > -			     &per_cpu(exception_stacks, cpu),
> > +		     sizeof(((struct cpu_entry_area *)0)->estacks));
> > +	cea_map_percpu_pages(&cea->estacks, &per_cpu(exception_stacks, cpu),
> >  			     sizeof(exception_stacks) / PAGE_SIZE, PAGE_KERNEL);
> >  #endif
> >  	percpu_setup_debug_store(cpu);
> > 
> > 
