* [Xen-devel] [PATCH 0/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown
@ 2019-07-26 13:52 Andrew Cooper
  2019-07-26 13:52 ` [Xen-devel] [PATCH 1/2] xen/link: Introduce .bss.percpu.page_aligned Andrew Cooper
  2019-07-26 13:52 ` [Xen-devel] [PATCH 2/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown Andrew Cooper
  0 siblings, 2 replies; 11+ messages in thread
From: Andrew Cooper @ 2019-07-26 13:52 UTC (permalink / raw)
  To: Xen-devel
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Julien Grall,
	Jan Beulich, Volodymyr Babchuk, Roger Pau Monné

Andrew Cooper (2):
  xen/link: Introduce .bss.percpu.page_aligned
  x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown

 xen/arch/arm/xen.lds.S          | 5 +++--
 xen/arch/x86/setup.c            | 2 --
 xen/arch/x86/traps.c            | 6 ++++++
 xen/arch/x86/xen.lds.S          | 7 +++++--
 xen/include/asm-x86/processor.h | 4 ++--
 xen/include/xen/percpu.h        | 2 ++
 6 files changed, 18 insertions(+), 8 deletions(-)

-- 
2.11.0


* [Xen-devel] [PATCH 1/2] xen/link: Introduce .bss.percpu.page_aligned
  2019-07-26 13:52 [Xen-devel] [PATCH 0/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown Andrew Cooper
@ 2019-07-26 13:52 ` Andrew Cooper
  2019-07-26 14:22   ` Roger Pau Monné
  2019-07-26 13:52 ` [Xen-devel] [PATCH 2/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown Andrew Cooper
  1 sibling, 1 reply; 11+ messages in thread
From: Andrew Cooper @ 2019-07-26 13:52 UTC (permalink / raw)
  To: Xen-devel
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Julien Grall,
	Jan Beulich, Volodymyr Babchuk, Roger Pau Monné

Future changes are going to need to page align some percpu data.

This means that the percpu area needs suitably aligning in the BSS so CPU0 has
correctly aligned data.  Shuffle the exact link order of items within the BSS
to give .bss.percpu.page_aligned appropriate alignment.

Additionally, introduce DEFINE_PER_CPU_PAGE_ALIGNED()
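
A usage sketch (mirroring the caller added in patch 2); note that the type
passed in still needs its own PAGE_SIZE alignment:

    /* Lands in .bss.percpu.page_aligned; the __aligned() attribute keeps the
     * object itself page aligned. */
    DEFINE_PER_CPU_PAGE_ALIGNED(struct __aligned(PAGE_SIZE) tss_struct, init_tss);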

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
---
 xen/arch/arm/xen.lds.S   | 5 +++--
 xen/arch/x86/xen.lds.S   | 5 +++--
 xen/include/xen/percpu.h | 2 ++
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 12c107f45d..07cbdf2543 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -201,14 +201,15 @@ SECTIONS
        *(.bss.stack_aligned)
        . = ALIGN(PAGE_SIZE);
        *(.bss.page_aligned)
-       *(.bss)
-       . = ALIGN(SMP_CACHE_BYTES);
        __per_cpu_start = .;
+       *(.bss.percpu.page_aligned)
        *(.bss.percpu)
        . = ALIGN(SMP_CACHE_BYTES);
        *(.bss.percpu.read_mostly)
        . = ALIGN(SMP_CACHE_BYTES);
        __per_cpu_data_end = .;
+       *(.bss)
+       . = ALIGN(SMP_CACHE_BYTES);
        __bss_end = .;
   } :text
   _end = . ;
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index a73139cd29..b8a2ea4259 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -293,14 +293,15 @@ SECTIONS
        __bss_start = .;
        *(.bss.stack_aligned)
        *(.bss.page_aligned*)
-       *(.bss)
-       . = ALIGN(SMP_CACHE_BYTES);
        __per_cpu_start = .;
+       *(.bss.percpu.page_aligned)
        *(.bss.percpu)
        . = ALIGN(SMP_CACHE_BYTES);
        *(.bss.percpu.read_mostly)
        . = ALIGN(SMP_CACHE_BYTES);
        __per_cpu_data_end = .;
+       *(.bss)
+       . = ALIGN(SMP_CACHE_BYTES);
        __bss_end = .;
   } :text
   _end = . ;
diff --git a/xen/include/xen/percpu.h b/xen/include/xen/percpu.h
index aeec5c19d6..c5291dc5e9 100644
--- a/xen/include/xen/percpu.h
+++ b/xen/include/xen/percpu.h
@@ -10,6 +10,8 @@
  * macro expanded, while still allowing a per-architecture symbol name prefix.
  */
 #define DEFINE_PER_CPU(type, name) __DEFINE_PER_CPU(type, _##name, )
+#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
+	__DEFINE_PER_CPU(type, _##name, .page_aligned)
 #define DEFINE_PER_CPU_READ_MOSTLY(type, name) \
 	__DEFINE_PER_CPU(type, _##name, .read_mostly)
 
-- 
2.11.0


* [Xen-devel] [PATCH 2/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown
  2019-07-26 13:52 [Xen-devel] [PATCH 0/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown Andrew Cooper
  2019-07-26 13:52 ` [Xen-devel] [PATCH 1/2] xen/link: Introduce .bss.percpu.page_aligned Andrew Cooper
@ 2019-07-26 13:52 ` Andrew Cooper
  2019-07-26 14:38   ` Roger Pau Monné
  1 sibling, 1 reply; 11+ messages in thread
From: Andrew Cooper @ 2019-07-26 13:52 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The XPTI work restricted the visibility of most of memory, but missed a few
aspects when it came to the TSS.

Given that the TSS is just an object in percpu data, the 4k mapping for it
created in setup_cpu_root_pgt() maps adjacent percpu data, making it all
leakable via Meltdown, even when XPTI is in use.

Furthermore, no care is taken to check that the TSS doesn't cross a page
boundary.  As it turns out, struct tss_struct is aligned on its size which
does prevent it straddling a page boundary, but this will cease to be true
once CET and Shadow Stack support is added to Xen.

Move the TSS into the page aligned percpu area, so no adjacent data can be
leaked.  Move the definition from setup.c to traps.c, which is a more
appropriate place for it to live.
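
A minimal sketch of the mapping granularity, assuming a hypothetical
map_tss_page() helper rather than the real setup_cpu_root_pgt() internals:

    /* Illustrative only: 'rpt' stands for the restricted page table and
     * map_tss_page() is a made-up helper.  The point is that the mapping
     * covers the whole 4k page containing the TSS, so anything else sharing
     * that page becomes readable via Meltdown. */
    unsigned long tss_va = (unsigned long)&per_cpu(init_tss, cpu);
    map_tss_page(rpt, tss_va & PAGE_MASK);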

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/setup.c            | 2 --
 xen/arch/x86/traps.c            | 6 ++++++
 xen/arch/x86/xen.lds.S          | 2 ++
 xen/include/asm-x86/processor.h | 4 ++--
 4 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index d2011910fa..1a2ffc4dc1 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -100,8 +100,6 @@ unsigned long __read_mostly xen_phys_start;
 
 unsigned long __read_mostly xen_virt_end;
 
-DEFINE_PER_CPU(struct tss_struct, init_tss);
-
 char __section(".bss.stack_aligned") __aligned(STACK_SIZE)
     cpu0_stack[STACK_SIZE];
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 38d12013db..e4b4587956 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -108,6 +108,12 @@ idt_entry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
 /* Pointer to the IDT of every CPU. */
 idt_entry_t *idt_tables[NR_CPUS] __read_mostly;
 
+/*
+ * The TSS is smaller than a page, but we give it a full page to avoid
+ * adjacent per-cpu data leaking via Meltdown when XPTI is in use.
+ */
+DEFINE_PER_CPU_PAGE_ALIGNED(struct __aligned(PAGE_SIZE) tss_struct, init_tss);
+
 bool (*ioemul_handle_quirk)(
     u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs);
 
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index b8a2ea4259..c82e1e504a 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -368,6 +368,8 @@ ASSERT(IS_ALIGNED(__2M_rwdata_end,   SECTION_ALIGN), "__2M_rwdata_end misaligned
 
 ASSERT(IS_ALIGNED(cpu0_stack, STACK_SIZE), "cpu0_stack misaligned")
 
+ASSERT(IS_ALIGNED(per_cpu__init_tss, PAGE_SIZE), "per_cpu(init_tss) misaligned")
+
 ASSERT(IS_ALIGNED(__init_begin, PAGE_SIZE), "__init_begin misaligned")
 ASSERT(IS_ALIGNED(__init_end,   PAGE_SIZE), "__init_end misaligned")
 
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 2862321eee..b5bee94931 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -411,7 +411,7 @@ static always_inline void __mwait(unsigned long eax, unsigned long ecx)
 #define IOBMP_BYTES             8192
 #define IOBMP_INVALID_OFFSET    0x8000
 
-struct __packed __cacheline_aligned tss_struct {
+struct __packed tss_struct {
     uint32_t :32;
     uint64_t rsp0, rsp1, rsp2;
     uint64_t :64;
@@ -425,6 +425,7 @@ struct __packed __cacheline_aligned tss_struct {
     /* Pads the TSS to be cacheline-aligned (total size is 0x80). */
     uint8_t __cacheline_filler[24];
 };
+DECLARE_PER_CPU(struct tss_struct, init_tss);
 
 #define IST_NONE 0UL
 #define IST_DF   1UL
@@ -463,7 +464,6 @@ static inline void disable_each_ist(idt_entry_t *idt)
 extern idt_entry_t idt_table[];
 extern idt_entry_t *idt_tables[];
 
-DECLARE_PER_CPU(struct tss_struct, init_tss);
 DECLARE_PER_CPU(root_pgentry_t *, root_pgt);
 
 extern void write_ptbase(struct vcpu *v);
-- 
2.11.0


* Re: [Xen-devel] [PATCH 1/2] xen/link: Introduce .bss.percpu.page_aligned
  2019-07-26 13:52 ` [Xen-devel] [PATCH 1/2] xen/link: Introduce .bss.percpu.page_aligned Andrew Cooper
@ 2019-07-26 14:22   ` Roger Pau Monné
  2019-07-26 14:30     ` Andrew Cooper
  0 siblings, 1 reply; 11+ messages in thread
From: Roger Pau Monné @ 2019-07-26 14:22 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Stefano Stabellini, Wei Liu, Julien Grall, Jan Beulich,
	Xen-devel, Volodymyr Babchuk

On Fri, Jul 26, 2019 at 02:52:39PM +0100, Andrew Cooper wrote:
> Future changes are going to need to page align some percpu data.
> 
> This means that the percpu area needs suitably aligning in the BSS so CPU0 has
> correctly aligned data.  Shuffle the exact link order of items within the BSS
> to give .bss.percpu.page_aligned appropriate alignment.
> 
> Additionally, introduce DEFINE_PER_CPU_PAGE_ALIGNED()
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> ---
>  xen/arch/arm/xen.lds.S   | 5 +++--
>  xen/arch/x86/xen.lds.S   | 5 +++--
>  xen/include/xen/percpu.h | 2 ++
>  3 files changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
> index 12c107f45d..07cbdf2543 100644
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -201,14 +201,15 @@ SECTIONS
>         *(.bss.stack_aligned)
>         . = ALIGN(PAGE_SIZE);
>         *(.bss.page_aligned)
> -       *(.bss)
> -       . = ALIGN(SMP_CACHE_BYTES);

Don't you also need a:

. = ALIGN(PAGE_SIZE);

here? Or is the size of .bss.page_aligned also aligned to page size?

Thanks, Roger.

* Re: [Xen-devel] [PATCH 1/2] xen/link: Introduce .bss.percpu.page_aligned
  2019-07-26 14:22   ` Roger Pau Monné
@ 2019-07-26 14:30     ` Andrew Cooper
  2019-07-26 14:38       ` Jan Beulich
  0 siblings, 1 reply; 11+ messages in thread
From: Andrew Cooper @ 2019-07-26 14:30 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Julien Grall, Jan Beulich,
	Xen-devel, Volodymyr Babchuk

On 26/07/2019 15:22, Roger Pau Monné wrote:
> On Fri, Jul 26, 2019 at 02:52:39PM +0100, Andrew Cooper wrote:
>> Future changes are going to need to page align some percpu data.
>>
>> This means that the percpu area needs suitably aligning in the BSS so CPU0 has
>> correctly aligned data.  Shuffle the exact link order of items within the BSS
>> to give .bss.percpu.page_aligned appropriate alignment.
>>
>> Additionally, introduce DEFINE_PER_CPU_PAGE_ALIGNED()
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien.grall@arm.com>
>> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> ---
>>  xen/arch/arm/xen.lds.S   | 5 +++--
>>  xen/arch/x86/xen.lds.S   | 5 +++--
>>  xen/include/xen/percpu.h | 2 ++
>>  3 files changed, 8 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
>> index 12c107f45d..07cbdf2543 100644
>> --- a/xen/arch/arm/xen.lds.S
>> +++ b/xen/arch/arm/xen.lds.S
>> @@ -201,14 +201,15 @@ SECTIONS
>>         *(.bss.stack_aligned)
>>         . = ALIGN(PAGE_SIZE);
>>         *(.bss.page_aligned)
>> -       *(.bss)
>> -       . = ALIGN(SMP_CACHE_BYTES);
> Don't you also need a:
>
> . = ALIGN(PAGE_SIZE);
>
> here?

No, (I don't think so).

> Or is the size of .bss.page_aligned also aligned to page size?

Every object inside .bss.page_aligned should have suitable (i.e.
multiple of) size and alignment.  Without this, things will break.

~Andrew

* Re: [Xen-devel] [PATCH 2/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown
  2019-07-26 13:52 ` [Xen-devel] [PATCH 2/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown Andrew Cooper
@ 2019-07-26 14:38   ` Roger Pau Monné
  2019-07-26 14:45     ` Andrew Cooper
  0 siblings, 1 reply; 11+ messages in thread
From: Roger Pau Monné @ 2019-07-26 14:38 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Jan Beulich

On Fri, Jul 26, 2019 at 02:52:40PM +0100, Andrew Cooper wrote:
> The XPTI work restricted the visibility of most of memory, but missed a few
> aspects when it came to the TSS.
> 
> Given that the TSS is just an object in percpu data, the 4k mapping for it
> created in setup_cpu_root_pgt() maps adjacent percpu data, making it all
> leakable via Meltdown, even when XPTI is in use.
> 
> Furthermore, no care is taken to check that the TSS doesn't cross a page
> boundary.  As it turns out, struct tss_struct is aligned on its size which
> does prevent it straddling a page boundary, but this will cease to be true
> once CET and Shadow Stack support is added to Xen.
> 
> Move the TSS into the page aligned percpu area, so no adjacent data can be
> leaked.  Move the definition from setup.c to traps.c, which is a more
> appropriate place for it to live.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/setup.c            | 2 --
>  xen/arch/x86/traps.c            | 6 ++++++
>  xen/arch/x86/xen.lds.S          | 2 ++
>  xen/include/asm-x86/processor.h | 4 ++--
>  4 files changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index d2011910fa..1a2ffc4dc1 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -100,8 +100,6 @@ unsigned long __read_mostly xen_phys_start;
>  
>  unsigned long __read_mostly xen_virt_end;
>  
> -DEFINE_PER_CPU(struct tss_struct, init_tss);
> -
>  char __section(".bss.stack_aligned") __aligned(STACK_SIZE)
>      cpu0_stack[STACK_SIZE];
>  
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 38d12013db..e4b4587956 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -108,6 +108,12 @@ idt_entry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
>  /* Pointer to the IDT of every CPU. */
>  idt_entry_t *idt_tables[NR_CPUS] __read_mostly;
>  
> +/*
> + * The TSS is smaller than a page, but we give it a full page to avoid
> + * adjacent per-cpu data leaking via Meltdown when XPTI is in use.
> + */
> +DEFINE_PER_CPU_PAGE_ALIGNED(struct __aligned(PAGE_SIZE) tss_struct, init_tss);

Can't you bundle the __aligned attribute in
DEFINE_PER_CPU_PAGE_ALIGNED itself?

#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
    __DEFINE_PER_CPU(type __aligned(PAGE_SIZE), _##name, .page_aligned)

I haven't tested this TBH. I'm also not able to find how __typeof__
(used by __DEFINE_PER_CPU) behaves regarding attributes, but I'm quite
sure it keeps them or else lots of things will break.

Thanks, Roger.

* Re: [Xen-devel] [PATCH 1/2] xen/link: Introduce .bss.percpu.page_aligned
  2019-07-26 14:30     ` Andrew Cooper
@ 2019-07-26 14:38       ` Jan Beulich
  2019-07-26 14:50         ` Andrew Cooper
  0 siblings, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2019-07-26 14:38 UTC (permalink / raw)
  To: Andrew Cooper, Roger Pau Monné
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk, Wei Liu

On 26.07.2019 16:30, Andrew Cooper wrote:
> On 26/07/2019 15:22, Roger Pau Monné wrote:
>> On Fri, Jul 26, 2019 at 02:52:39PM +0100, Andrew Cooper wrote:
>>> Future changes are going to need to page align some percpu data.
>>>
>>> This means that the percpu area needs suitably aligning in the BSS so CPU0 has
>>> correctly aligned data.  Shuffle the exact link order of items within the BSS
>>> to give .bss.percpu.page_aligned appropriate alignment.
>>>
>>> Additionally, introduce DEFINE_PER_CPU_PAGE_ALIGNED()
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> ---
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Wei Liu <wl@xen.org>
>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Julien Grall <julien.grall@arm.com>
>>> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>>> ---
>>>   xen/arch/arm/xen.lds.S   | 5 +++--
>>>   xen/arch/x86/xen.lds.S   | 5 +++--
>>>   xen/include/xen/percpu.h | 2 ++
>>>   3 files changed, 8 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
>>> index 12c107f45d..07cbdf2543 100644
>>> --- a/xen/arch/arm/xen.lds.S
>>> +++ b/xen/arch/arm/xen.lds.S
>>> @@ -201,14 +201,15 @@ SECTIONS
>>>          *(.bss.stack_aligned)
>>>          . = ALIGN(PAGE_SIZE);
>>>          *(.bss.page_aligned)
>>> -       *(.bss)
>>> -       . = ALIGN(SMP_CACHE_BYTES);
>> Don't you also need a:
>>
>> . = ALIGN(PAGE_SIZE);
>>
>> here?
> 
> No, (I don't think so).
> 
>> Or is the size of .bss.page_aligned also aligned to page size?
> 
> Every object inside .bss.page_aligned should have suitable (i.e.
> multiple of) size and alignment.  Without this, things will break.

I'm not sure we should have such a requirement: Objects in
.*.page_aligned sections should themselves have PAGE_SIZE alignment
(i.e. there shouldn't be a need to ALIGN() _ahead_ of the section
directive in the script (that is, the one in context above should
actually be redundant). But I'm not sure about demanding their
size to be a multiple of PAGE_SIZE - while C will guarantee this
(and waste space in certain cases), assembly constructs could still
be written such that the trailing unused space can be re-used. Otoh
I agree with your virtual statement of this being a little fragile.
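
A minimal sketch of that C guarantee, using a made-up struct:

    /* sizeof(struct pg) is PAGE_SIZE even though only one byte is used,
     * because sizeof is always a multiple of the type's alignment. */
    struct __aligned(PAGE_SIZE) pg {
        char c;
    };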

Jan

* Re: [Xen-devel] [PATCH 2/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown
  2019-07-26 14:38   ` Roger Pau Monné
@ 2019-07-26 14:45     ` Andrew Cooper
  2019-07-26 15:19       ` Roger Pau Monné
  0 siblings, 1 reply; 11+ messages in thread
From: Andrew Cooper @ 2019-07-26 14:45 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Xen-devel, Wei Liu, Jan Beulich

On 26/07/2019 15:38, Roger Pau Monné wrote:
> On Fri, Jul 26, 2019 at 02:52:40PM +0100, Andrew Cooper wrote:
>> The XPTI work restricted the visibility of most of memory, but missed a few
>> aspects when it came to the TSS.
>>
>> Given that the TSS is just an object in percpu data, the 4k mapping for it
>> created in setup_cpu_root_pgt() maps adjacent percpu data, making it all
>> leakable via Meltdown, even when XPTI is in use.
>>
>> Furthermore, no care is taken to check that the TSS doesn't cross a page
>> boundary.  As it turns out, struct tss_struct is aligned on its size which
>> does prevent it straddling a page boundary, but this will cease to be true
>> once CET and Shadow Stack support is added to Xen.
>>
>> Move the TSS into the page aligned percpu area, so no adjacent data can be
>> leaked.  Move the definition from setup.c to traps.c, which is a more
>> appropriate place for it to live.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>>  xen/arch/x86/setup.c            | 2 --
>>  xen/arch/x86/traps.c            | 6 ++++++
>>  xen/arch/x86/xen.lds.S          | 2 ++
>>  xen/include/asm-x86/processor.h | 4 ++--
>>  4 files changed, 10 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
>> index d2011910fa..1a2ffc4dc1 100644
>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -100,8 +100,6 @@ unsigned long __read_mostly xen_phys_start;
>>  
>>  unsigned long __read_mostly xen_virt_end;
>>  
>> -DEFINE_PER_CPU(struct tss_struct, init_tss);
>> -
>>  char __section(".bss.stack_aligned") __aligned(STACK_SIZE)
>>      cpu0_stack[STACK_SIZE];
>>  
>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>> index 38d12013db..e4b4587956 100644
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -108,6 +108,12 @@ idt_entry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
>>  /* Pointer to the IDT of every CPU. */
>>  idt_entry_t *idt_tables[NR_CPUS] __read_mostly;
>>  
>> +/*
>> + * The TSS is smaller than a page, but we give it a full page to avoid
>> + * adjacent per-cpu data leaking via Meltdown when XPTI is in use.
>> + */
>> +DEFINE_PER_CPU_PAGE_ALIGNED(struct __aligned(PAGE_SIZE) tss_struct, init_tss);
> Can't you bundle the __aligned attribute in
> DEFINE_PER_CPU_PAGE_ALIGNED itself?
>
> #define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
>     __DEFINE_PER_CPU(type __aligned(PAGE_SIZE), _##name, .page_aligned)
>
> I haven't tested this TBH.

I did.  It doesn't compile, because the attribute follows the declaration.

Observe that the patch puts __aligned() between struct and tss_struct.

There is no way to do this inside DEFINE_PER_CPU_PAGE_ALIGNED(), because
we can't break the type apart to insert __aligned() in the middle.

~Andrew

* Re: [Xen-devel] [PATCH 1/2] xen/link: Introduce .bss.percpu.page_aligned
  2019-07-26 14:38       ` Jan Beulich
@ 2019-07-26 14:50         ` Andrew Cooper
  0 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2019-07-26 14:50 UTC (permalink / raw)
  To: Jan Beulich, Roger Pau Monné
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk, Wei Liu

On 26/07/2019 15:38, Jan Beulich wrote:
> On 26.07.2019 16:30, Andrew Cooper wrote:
>> On 26/07/2019 15:22, Roger Pau Monné wrote:
>>> On Fri, Jul 26, 2019 at 02:52:39PM +0100, Andrew Cooper wrote:
>>>> Future changes are going to need to page align some percpu data.
>>>>
>>>> This means that the percpu area needs suitably aligning in the BSS so CPU0 has
>>>> correctly aligned data.  Shuffle the exact link order of items within the BSS
>>>> to give .bss.percpu.page_aligned appropriate alignment.
>>>>
>>>> Additionally, introduce DEFINE_PER_CPU_PAGE_ALIGNED()
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> ---
>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>> CC: Wei Liu <wl@xen.org>
>>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>> CC: Julien Grall <julien.grall@arm.com>
>>>> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>>>> ---
>>>>   xen/arch/arm/xen.lds.S   | 5 +++--
>>>>   xen/arch/x86/xen.lds.S   | 5 +++--
>>>>   xen/include/xen/percpu.h | 2 ++
>>>>   3 files changed, 8 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
>>>> index 12c107f45d..07cbdf2543 100644
>>>> --- a/xen/arch/arm/xen.lds.S
>>>> +++ b/xen/arch/arm/xen.lds.S
>>>> @@ -201,14 +201,15 @@ SECTIONS
>>>>          *(.bss.stack_aligned)
>>>>          . = ALIGN(PAGE_SIZE);
>>>>          *(.bss.page_aligned)
>>>> -       *(.bss)
>>>> -       . = ALIGN(SMP_CACHE_BYTES);
>>> Don't you also need a:
>>>
>>> . = ALIGN(PAGE_SIZE);
>>>
>>> here?
>> No, (I don't think so).
>>
>>> Or is the size of .bss.page_aligned also aligned to page size?
>> Every object inside .bss.page_aligned should have suitable (i.e.
>> multiple of) size and alignment.  Without this, things will break.
> I'm not sure we should have such a requirement: Objects in
> .*.page_aligned sections should themselves have PAGE_SIZE alignment
> (i.e. there shouldn't be a need to ALIGN() _ahead_ of the section
> directive in the script (that is, the one in context above should
> actually be redundant). But I'm not sure about demanding their
> size to be a multiple of PAGE_SIZE - while C will guarantee this
> (and waste space in certain cases), assembly constructs could still
> be written such that the trailing unused space can be re-used. Otoh
> I agree with your virtual statement of this being a little fragile.

I suppose that all that really matters is that each object has page
alignment, and that will cause things to be properly laid out.

Where this goes wrong is for an object which isn't a multiple of 4k,
followed by an object which doesn't have 4k alignment.
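
A minimal sketch of that failure mode, with made-up object names:

    /* 'first' is page aligned but 0x1080 bytes long; 'second' relies purely
     * on section placement for its alignment, so it can be placed at
     * first + 0x1080, which is not a page boundary. */
    char __section(".bss.page_aligned") __aligned(PAGE_SIZE) first[PAGE_SIZE + 0x80];
    char __section(".bss.page_aligned") second[PAGE_SIZE];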

Either way, I think the patch is fine as is.

~Andrew

* Re: [Xen-devel] [PATCH 2/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown
  2019-07-26 14:45     ` Andrew Cooper
@ 2019-07-26 15:19       ` Roger Pau Monné
  2019-07-26 16:45         ` Julien Grall
  0 siblings, 1 reply; 11+ messages in thread
From: Roger Pau Monné @ 2019-07-26 15:19 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Jan Beulich

On Fri, Jul 26, 2019 at 03:45:22PM +0100, Andrew Cooper wrote:
> On 26/07/2019 15:38, Roger Pau Monné wrote:
> > On Fri, Jul 26, 2019 at 02:52:40PM +0100, Andrew Cooper wrote:
> >> The XPTI work restricted the visibility of most of memory, but missed a few
> >> aspects when it came to the TSS.
> >>
> >> Given that the TSS is just an object in percpu data, the 4k mapping for it
> >> created in setup_cpu_root_pgt() maps adjacent percpu data, making it all
> >> leakable via Meltdown, even when XPTI is in use.
> >>
> >> Furthermore, no care is taken to check that the TSS doesn't cross a page
> >> boundary.  As it turns out, struct tss_struct is aligned on its size which
> >> does prevent it straddling a page boundary, but this will cease to be true
> >> once CET and Shadow Stack support is added to Xen.
> >>
> >> Move the TSS into the page aligned percpu area, so no adjacent data can be
> >> leaked.  Move the definition from setup.c to traps.c, which is a more
> >> appropriate place for it to live.
> >>
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> ---
> >> CC: Jan Beulich <JBeulich@suse.com>
> >> CC: Wei Liu <wl@xen.org>
> >> CC: Roger Pau Monné <roger.pau@citrix.com>
> >> ---
> >>  xen/arch/x86/setup.c            | 2 --
> >>  xen/arch/x86/traps.c            | 6 ++++++
> >>  xen/arch/x86/xen.lds.S          | 2 ++
> >>  xen/include/asm-x86/processor.h | 4 ++--
> >>  4 files changed, 10 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> >> index d2011910fa..1a2ffc4dc1 100644
> >> --- a/xen/arch/x86/setup.c
> >> +++ b/xen/arch/x86/setup.c
> >> @@ -100,8 +100,6 @@ unsigned long __read_mostly xen_phys_start;
> >>  
> >>  unsigned long __read_mostly xen_virt_end;
> >>  
> >> -DEFINE_PER_CPU(struct tss_struct, init_tss);
> >> -
> >>  char __section(".bss.stack_aligned") __aligned(STACK_SIZE)
> >>      cpu0_stack[STACK_SIZE];
> >>  
> >> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> >> index 38d12013db..e4b4587956 100644
> >> --- a/xen/arch/x86/traps.c
> >> +++ b/xen/arch/x86/traps.c
> >> @@ -108,6 +108,12 @@ idt_entry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
> >>  /* Pointer to the IDT of every CPU. */
> >>  idt_entry_t *idt_tables[NR_CPUS] __read_mostly;
> >>  
> >> +/*
> >> + * The TSS is smaller than a page, but we give it a full page to avoid
> >> + * adjacent per-cpu data leaking via Meltdown when XPTI is in use.
> >> + */
> >> +DEFINE_PER_CPU_PAGE_ALIGNED(struct __aligned(PAGE_SIZE) tss_struct, init_tss);
> > Can't you bundle the __aligned attribute in
> > DEFINE_PER_CPU_PAGE_ALIGNED itself?
> >
> > #define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
> >     __DEFINE_PER_CPU(type __aligned(PAGE_SIZE), _##name, .page_aligned)
> >
> > I haven't tested this TBH.
> 
> I did.  It doesn't compile, because the attribute follows the declaration.
> 
> Observe that the patch puts __aligned() between struct and tss_struct.
> 
> There is no way to do this inside DEFINE_PER_CPU_PAGE_ALIGNED(), because
> we can't break the type apart to insert __aligned() in the middle.

And doing something like:

#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
    __DEFINE_PER_CPU(type, _##name, .page_aligned) __aligned(PAGE_SIZE)

Won't work either I guess?

I just find it uncomfortable to have to specify the aligned
attribute in every usage of DEFINE_PER_CPU_PAGE_ALIGNED.

Thanks, Roger.

* Re: [Xen-devel] [PATCH 2/2] x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown
  2019-07-26 15:19       ` Roger Pau Monné
@ 2019-07-26 16:45         ` Julien Grall
  0 siblings, 0 replies; 11+ messages in thread
From: Julien Grall @ 2019-07-26 16:45 UTC (permalink / raw)
  To: Roger Pau Monné, Andrew Cooper; +Cc: Xen-devel, Jan Beulich, Wei Liu

Hi,

On 26/07/2019 16:19, Roger Pau Monné wrote:
> On Fri, Jul 26, 2019 at 03:45:22PM +0100, Andrew Cooper wrote:
>> On 26/07/2019 15:38, Roger Pau Monné wrote:
>>> On Fri, Jul 26, 2019 at 02:52:40PM +0100, Andrew Cooper wrote:
>> There is no way to do this inside DEFINE_PER_CPU_PAGE_ALIGNED(), because
>> we can't break the type apart to insert __aligned() in the middle.
> 
> And doing something like:
> 
> #define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
>      __DEFINE_PER_CPU(type, _##name, .page_aligned) __aligned(PAGE_SIZE)
> 
> Won't work either I guess?
> 
> I just find it unconformable to have to specify the aligned
> attribute in every usage of DEFINE_PER_CPU_PAGE_ALIGNED.

I was about to comment the same on patch #1 and decided to look at one of the 
callers first.

Because of the name, I was expecting the macro to actually do the alignment for 
you as well.

If it is not possible, then I think we should add a comment so developers know 
they have to add the page-alignment themselves.
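
One possible wording for such a comment, as a sketch:

    /*
     * NB: DEFINE_PER_CPU_PAGE_ALIGNED() only selects the
     * .bss.percpu.page_aligned section.  Callers must still give the type
     * PAGE_SIZE alignment themselves, e.g.
     *   DEFINE_PER_CPU_PAGE_ALIGNED(struct __aligned(PAGE_SIZE) foo, bar);
     */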

Cheers,

-- 
Julien Grall
