* [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes
@ 2022-11-10 20:34 Sean Christopherson
  2022-11-10 20:35 ` [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping Sean Christopherson
                   ` (5 more replies)
  0 siblings, 6 replies; 25+ messages in thread
From: Sean Christopherson @ 2022-11-10 20:34 UTC (permalink / raw)
  To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, Andrey Ryabinin
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	Sean Christopherson, syzbot+ffb4f000dc2872c93f62,
	syzbot+8cdd16fd5a6c0565e227

Three fixes for the recent changes to how KASAN populates shadows for
the per-CPU portion of the CPU entry areas.  The v1 versions were posted
independently as I kept root causing issues after posting individual fixes.

v2:
  - Map the entire per-CPU area in one shot. [Andrey]
  - Use the "early", i.e. read-only, variant to populate the shadow for
    the shared portion (read-only IDT mapping) of the CEA. [Andrey]

v1:
  - https://lore.kernel.org/all/20221104212433.1339826-1-seanjc@google.com
  - https://lore.kernel.org/all/20221104220053.1702977-1-seanjc@google.com
  - https://lore.kernel.org/all/20221104183247.834988-1-seanjc@google.com

Sean Christopherson (5):
  x86/mm: Recompute physical address for every page of per-CPU CEA
    mapping
  x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry
    area
  x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names
  x86/kasan: Add helpers to align shadow addresses up and down
  x86/kasan: Populate shadow for shared chunk of the CPU entry area

 arch/x86/mm/cpu_entry_area.c | 10 +++-----
 arch/x86/mm/kasan_init_64.c  | 50 +++++++++++++++++++++++-------------
 2 files changed, 36 insertions(+), 24 deletions(-)


base-commit: 0008712a508f72242d185142cfdbd0646a661a18
-- 
2.38.1.431.g37b22c650d-goog


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping
  2022-11-10 20:34 [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes Sean Christopherson
@ 2022-11-10 20:35 ` Sean Christopherson
  2022-11-14 14:09   ` Andrey Ryabinin
                     ` (2 more replies)
  2022-11-10 20:35 ` [PATCH v2 2/5] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area Sean Christopherson
                   ` (4 subsequent siblings)
  5 siblings, 3 replies; 25+ messages in thread
From: Sean Christopherson @ 2022-11-10 20:35 UTC (permalink / raw)
  To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, Andrey Ryabinin
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	Sean Christopherson, syzbot+ffb4f000dc2872c93f62,
	syzbot+8cdd16fd5a6c0565e227

Recompute the physical address for each per-CPU page in the CPU entry
area; a recent commit inadvertently modified cea_map_percpu_pages() such
that every PTE is mapped to the physical address of the first page.

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/mm/cpu_entry_area.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index dff9001e5e12..d831aae94b41 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -97,7 +97,7 @@ cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 					early_pfn_to_nid(PFN_DOWN(pa)));
 
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
-		cea_set_pte(cea_vaddr, pa, prot);
+		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
 
 static void __init percpu_setup_debug_store(unsigned int cpu)
-- 
2.38.1.431.g37b22c650d-goog


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v2 2/5] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area
  2022-11-10 20:34 [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes Sean Christopherson
  2022-11-10 20:35 ` [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping Sean Christopherson
@ 2022-11-10 20:35 ` Sean Christopherson
  2022-11-14 14:10   ` Andrey Ryabinin
                     ` (2 more replies)
  2022-11-10 20:35 ` [PATCH v2 3/5] x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names Sean Christopherson
                   ` (3 subsequent siblings)
  5 siblings, 3 replies; 25+ messages in thread
From: Sean Christopherson @ 2022-11-10 20:35 UTC (permalink / raw)
  To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, Andrey Ryabinin
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	Sean Christopherson, syzbot+ffb4f000dc2872c93f62,
	syzbot+8cdd16fd5a6c0565e227

Populate a KASAN shadow for the entire possible per-CPU range of the CPU
entry area instead of requiring that each individual chunk map a shadow.
Mapping shadows individually is error prone, e.g. the per-CPU GDT mapping
was left behind, which can lead to not-present page faults during KASAN
validation if the kernel performs a software lookup into the GDT.  The DS
buffer is also likely affected.

The motivation for mapping the per-CPU areas on-demand was to avoid
mapping the entire 512GiB range that's reserved for the CPU entry area;
shaving a few bytes by not creating shadows for potentially unused memory
was not a goal.
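
(For a rough sense of scale, assuming the generic KASAN granularity of one
shadow byte per eight bytes of memory: backing the shadow for the full
512GiB reservation with real pages would take 512GiB / 8 = 64GiB, which is
why the shadow is only populated for CPU entry areas that are actually set
up.)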

The bug is most easily reproduced by doing a sigreturn with a garbage
CS in the sigcontext, e.g.

  int main(void)
  {
    struct sigcontext regs;

    syscall(__NR_mmap, 0x1ffff000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);
    syscall(__NR_mmap, 0x20000000ul, 0x1000000ul, 7ul, 0x32ul, -1, 0ul);
    syscall(__NR_mmap, 0x21000000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);

    memset(&regs, 0, sizeof(regs));
    regs.cs = 0x1d0;
    syscall(__NR_rt_sigreturn);
    return 0;
  }

to coerce the kernel into doing a GDT lookup to compute CS.base when
reading the instruction bytes on the subsequent #GP to determine whether
or not the #GP is something the kernel should handle, e.g. to fixup UMIP
violations or to emulate CLI/STI for IOPL=3 applications.

  BUG: unable to handle page fault for address: fffffbc8379ace00
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 16c03a067 P4D 16c03a067 PUD 15b990067 PMD 15b98f067 PTE 0
  Oops: 0000 [#1] PREEMPT SMP KASAN
  CPU: 3 PID: 851 Comm: r2 Not tainted 6.1.0-rc3-next-20221103+ #432
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
  RIP: 0010:kasan_check_range+0xdf/0x190
  Call Trace:
   <TASK>
   get_desc+0xb0/0x1d0
   insn_get_seg_base+0x104/0x270
   insn_fetch_from_user+0x66/0x80
   fixup_umip_exception+0xb1/0x530
   exc_general_protection+0x181/0x210
   asm_exc_general_protection+0x22/0x30
  RIP: 0003:0x0
  Code: Unable to access opcode bytes at 0xffffffffffffffd6.
  RSP: 0003:0000000000000000 EFLAGS: 00000202
  RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000000001d0
  RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
  RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
  R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
   </TASK>

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Reported-by: syzbot+ffb4f000dc2872c93f62@syzkaller.appspotmail.com
Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: kasan-dev@googlegroups.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/mm/cpu_entry_area.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index d831aae94b41..7c855dffcdc2 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -91,11 +91,6 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
 static void __init
 cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 {
-	phys_addr_t pa = per_cpu_ptr_to_phys(ptr);
-
-	kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
-					early_pfn_to_nid(PFN_DOWN(pa)));
-
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
 		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
@@ -195,6 +190,9 @@ static void __init setup_cpu_entry_area(unsigned int cpu)
 	pgprot_t tss_prot = PAGE_KERNEL;
 #endif
 
+	kasan_populate_shadow_for_vaddr(cea, CPU_ENTRY_AREA_SIZE,
+					early_cpu_to_node(cpu));
+
 	cea_set_pte(&cea->gdt, get_cpu_gdt_paddr(cpu), gdt_prot);
 
 	cea_map_percpu_pages(&cea->entry_stack_page,
-- 
2.38.1.431.g37b22c650d-goog


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v2 3/5] x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names
  2022-11-10 20:34 [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes Sean Christopherson
  2022-11-10 20:35 ` [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping Sean Christopherson
  2022-11-10 20:35 ` [PATCH v2 2/5] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area Sean Christopherson
@ 2022-11-10 20:35 ` Sean Christopherson
  2022-11-14 14:10   ` Andrey Ryabinin
                     ` (2 more replies)
  2022-11-10 20:35 ` [PATCH v2 4/5] x86/kasan: Add helpers to align shadow addresses up and down Sean Christopherson
                   ` (2 subsequent siblings)
  5 siblings, 3 replies; 25+ messages in thread
From: Sean Christopherson @ 2022-11-10 20:35 UTC (permalink / raw)
  To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, Andrey Ryabinin
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	Sean Christopherson, syzbot+ffb4f000dc2872c93f62,
	syzbot+8cdd16fd5a6c0565e227

Rename the CPU entry area variables in kasan_init() to shorten their
names; a future fix will reference the beginning of the per-CPU portion
of the CPU entry area, and shadow_cpu_entry_per_cpu_begin is a bit much.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/mm/kasan_init_64.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index d1416926ad52..ad7872ae10ed 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -331,7 +331,7 @@ void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
 void __init kasan_init(void)
 {
 	int i;
-	void *shadow_cpu_entry_begin, *shadow_cpu_entry_end;
+	void *shadow_cea_begin, *shadow_cea_end;
 
 	memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
 
@@ -372,16 +372,16 @@ void __init kasan_init(void)
 		map_range(&pfn_mapped[i]);
 	}
 
-	shadow_cpu_entry_begin = (void *)CPU_ENTRY_AREA_BASE;
-	shadow_cpu_entry_begin = kasan_mem_to_shadow(shadow_cpu_entry_begin);
-	shadow_cpu_entry_begin = (void *)round_down(
-			(unsigned long)shadow_cpu_entry_begin, PAGE_SIZE);
+	shadow_cea_begin = (void *)CPU_ENTRY_AREA_BASE;
+	shadow_cea_begin = kasan_mem_to_shadow(shadow_cea_begin);
+	shadow_cea_begin = (void *)round_down(
+			(unsigned long)shadow_cea_begin, PAGE_SIZE);
 
-	shadow_cpu_entry_end = (void *)(CPU_ENTRY_AREA_BASE +
+	shadow_cea_end = (void *)(CPU_ENTRY_AREA_BASE +
 					CPU_ENTRY_AREA_MAP_SIZE);
-	shadow_cpu_entry_end = kasan_mem_to_shadow(shadow_cpu_entry_end);
-	shadow_cpu_entry_end = (void *)round_up(
-			(unsigned long)shadow_cpu_entry_end, PAGE_SIZE);
+	shadow_cea_end = kasan_mem_to_shadow(shadow_cea_end);
+	shadow_cea_end = (void *)round_up(
+			(unsigned long)shadow_cea_end, PAGE_SIZE);
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
@@ -403,9 +403,9 @@ void __init kasan_init(void)
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
-		shadow_cpu_entry_begin);
+		shadow_cea_begin);
 
-	kasan_populate_early_shadow(shadow_cpu_entry_end,
+	kasan_populate_early_shadow(shadow_cea_end,
 			kasan_mem_to_shadow((void *)__START_KERNEL_map));
 
 	kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(_stext),
-- 
2.38.1.431.g37b22c650d-goog


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v2 4/5] x86/kasan: Add helpers to align shadow addresses up and down
  2022-11-10 20:34 [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes Sean Christopherson
                   ` (2 preceding siblings ...)
  2022-11-10 20:35 ` [PATCH v2 3/5] x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names Sean Christopherson
@ 2022-11-10 20:35 ` Sean Christopherson
  2022-11-14 14:13   ` Andrey Ryabinin
                     ` (2 more replies)
  2022-11-10 20:35 ` [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area Sean Christopherson
  2022-11-14 11:57 ` [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes Peter Zijlstra
  5 siblings, 3 replies; 25+ messages in thread
From: Sean Christopherson @ 2022-11-10 20:35 UTC (permalink / raw)
  To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, Andrey Ryabinin
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	Sean Christopherson, syzbot+ffb4f000dc2872c93f62,
	syzbot+8cdd16fd5a6c0565e227

Add helpers to dedup code for aligning shadow address up/down to page
boundaries when translating an address to its shadow.

No functional change intended.
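
For reference, a minimal userspace sketch of the translation the helpers
wrap; the shadow offset and CPU entry area base below are illustrative
x86-64 defaults, not values taken from this series:

  /*
   * Assumes generic KASAN (1 shadow byte per 8 bytes of memory) and 4KiB
   * pages; compile as a normal 64-bit userspace program.
   */
  #include <stdio.h>

  #define PAGE_SIZE                 4096UL
  #define KASAN_SHADOW_SCALE_SHIFT  3
  #define KASAN_SHADOW_OFFSET       0xdffffc0000000000UL /* illustrative */
  #define CPU_ENTRY_AREA_BASE       0xfffffe0000000000UL /* illustrative */

  static unsigned long mem_to_shadow(unsigned long va)
  {
    return (va >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
  }

  /* Mirrors kasan_mem_to_shadow_align_down()/_up() from this patch. */
  static unsigned long shadow_align_down(unsigned long va)
  {
    return mem_to_shadow(va) & ~(PAGE_SIZE - 1);
  }

  static unsigned long shadow_align_up(unsigned long va)
  {
    return (mem_to_shadow(va) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
  }

  int main(void)
  {
    /* One page of shadow covers PAGE_SIZE << 3 = 32KiB of address space. */
    printf("shadow begin: 0x%lx\n", shadow_align_down(CPU_ENTRY_AREA_BASE));
    printf("shadow end:   0x%lx\n",
           shadow_align_up(CPU_ENTRY_AREA_BASE + (512UL << 30)));
    return 0;
  }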

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/mm/kasan_init_64.c | 40 ++++++++++++++++++++-----------------
 1 file changed, 22 insertions(+), 18 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index ad7872ae10ed..afc5e129ca7b 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -316,22 +316,33 @@ void __init kasan_early_init(void)
 	kasan_map_early_shadow(init_top_pgt);
 }
 
+static unsigned long kasan_mem_to_shadow_align_down(unsigned long va)
+{
+	unsigned long shadow = (unsigned long)kasan_mem_to_shadow((void *)va);
+
+	return round_down(shadow, PAGE_SIZE);
+}
+
+static unsigned long kasan_mem_to_shadow_align_up(unsigned long va)
+{
+	unsigned long shadow = (unsigned long)kasan_mem_to_shadow((void *)va);
+
+	return round_up(shadow, PAGE_SIZE);
+}
+
 void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
 {
 	unsigned long shadow_start, shadow_end;
 
-	shadow_start = (unsigned long)kasan_mem_to_shadow(va);
-	shadow_start = round_down(shadow_start, PAGE_SIZE);
-	shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
-	shadow_end = round_up(shadow_end, PAGE_SIZE);
-
+	shadow_start = kasan_mem_to_shadow_align_down((unsigned long)va);
+	shadow_end = kasan_mem_to_shadow_align_up((unsigned long)va + size);
 	kasan_populate_shadow(shadow_start, shadow_end, nid);
 }
 
 void __init kasan_init(void)
 {
+	unsigned long shadow_cea_begin, shadow_cea_end;
 	int i;
-	void *shadow_cea_begin, *shadow_cea_end;
 
 	memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
 
@@ -372,16 +383,9 @@ void __init kasan_init(void)
 		map_range(&pfn_mapped[i]);
 	}
 
-	shadow_cea_begin = (void *)CPU_ENTRY_AREA_BASE;
-	shadow_cea_begin = kasan_mem_to_shadow(shadow_cea_begin);
-	shadow_cea_begin = (void *)round_down(
-			(unsigned long)shadow_cea_begin, PAGE_SIZE);
-
-	shadow_cea_end = (void *)(CPU_ENTRY_AREA_BASE +
-					CPU_ENTRY_AREA_MAP_SIZE);
-	shadow_cea_end = kasan_mem_to_shadow(shadow_cea_end);
-	shadow_cea_end = (void *)round_up(
-			(unsigned long)shadow_cea_end, PAGE_SIZE);
+	shadow_cea_begin = kasan_mem_to_shadow_align_down(CPU_ENTRY_AREA_BASE);
+	shadow_cea_end = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_BASE +
+						      CPU_ENTRY_AREA_MAP_SIZE);
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
@@ -403,9 +407,9 @@ void __init kasan_init(void)
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
-		shadow_cea_begin);
+		(void *)shadow_cea_begin);
 
-	kasan_populate_early_shadow(shadow_cea_end,
+	kasan_populate_early_shadow((void *)shadow_cea_end,
 			kasan_mem_to_shadow((void *)__START_KERNEL_map));
 
 	kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(_stext),
-- 
2.38.1.431.g37b22c650d-goog


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area
  2022-11-10 20:34 [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes Sean Christopherson
                   ` (3 preceding siblings ...)
  2022-11-10 20:35 ` [PATCH v2 4/5] x86/kasan: Add helpers to align shadow addresses up and down Sean Christopherson
@ 2022-11-10 20:35 ` Sean Christopherson
  2022-11-14 14:44   ` Andrey Ryabinin
                     ` (2 more replies)
  2022-11-14 11:57 ` [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes Peter Zijlstra
  5 siblings, 3 replies; 25+ messages in thread
From: Sean Christopherson @ 2022-11-10 20:35 UTC (permalink / raw)
  To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, Andrey Ryabinin
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	Sean Christopherson, syzbot+ffb4f000dc2872c93f62,
	syzbot+8cdd16fd5a6c0565e227

Populate the shadow for the shared portion of the CPU entry area, i.e.
the read-only IDT mapping, during KASAN initialization.  A recent change
modified KASAN to map the per-CPU areas on-demand, but forgot to keep a
shadow for the common area that is shared amongst all CPUs.

Map the common area in KASAN init instead of letting idt_map_in_cea() do
the dirty work so that it Just Works in the unlikely event more shared
data is shoved into the CPU entry area.
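
For context, the shared chunk in question is the lone read-only IDT page at
the very start of the CPU entry area, with the per-CPU areas starting one
page later.  A rough layout sketch (assuming the cpu_entry_area.h
definitions of the time, shown here for illustration only):

  CPU_ENTRY_AREA_RO_IDT  == CPU_ENTRY_AREA_BASE                /* shared, read-only IDT page */
  CPU_ENTRY_AREA_PER_CPU == CPU_ENTRY_AREA_RO_IDT + PAGE_SIZE  /* start of the per-CPU areas */

so the shadow populated here only needs to cover
[CPU_ENTRY_AREA_BASE, CPU_ENTRY_AREA_PER_CPU).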

The bug manifests as a not-present #PF when software attempts to lookup
an IDT entry, e.g. when KVM is handling IRQs on Intel CPUs (KVM performs
direct CALL to the IRQ handler to avoid the overhead of INTn):

 BUG: unable to handle page fault for address: fffffbc0000001d8
 #PF: supervisor read access in kernel mode
 #PF: error_code(0x0000) - not-present page
 PGD 16c03a067 P4D 16c03a067 PUD 0
 Oops: 0000 [#1] PREEMPT SMP KASAN
 CPU: 5 PID: 901 Comm: repro Tainted: G        W          6.1.0-rc3+ #410
 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
 RIP: 0010:kasan_check_range+0xdf/0x190
  vmx_handle_exit_irqoff+0x152/0x290 [kvm_intel]
  vcpu_run+0x1d89/0x2bd0 [kvm]
  kvm_arch_vcpu_ioctl_run+0x3ce/0xa70 [kvm]
  kvm_vcpu_ioctl+0x349/0x900 [kvm]
  __x64_sys_ioctl+0xb8/0xf0
  do_syscall_64+0x2b/0x50
  entry_SYSCALL_64_after_hwframe+0x46/0xb0

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Reported-by: syzbot+8cdd16fd5a6c0565e227@syzkaller.appspotmail.com
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/mm/kasan_init_64.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index afc5e129ca7b..af82046348a0 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -341,7 +341,7 @@ void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
 
 void __init kasan_init(void)
 {
-	unsigned long shadow_cea_begin, shadow_cea_end;
+	unsigned long shadow_cea_begin, shadow_cea_per_cpu_begin, shadow_cea_end;
 	int i;
 
 	memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
@@ -384,6 +384,7 @@ void __init kasan_init(void)
 	}
 
 	shadow_cea_begin = kasan_mem_to_shadow_align_down(CPU_ENTRY_AREA_BASE);
+	shadow_cea_per_cpu_begin = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_PER_CPU);
 	shadow_cea_end = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_BASE +
 						      CPU_ENTRY_AREA_MAP_SIZE);
 
@@ -409,6 +410,15 @@ void __init kasan_init(void)
 		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
 		(void *)shadow_cea_begin);
 
+	/*
+	 * Populate the shadow for the shared portion of the CPU entry area.
+	 * Shadows for the per-CPU areas are mapped on-demand, as each CPU's
+	 * area is randomly placed somewhere in the 512GiB range and mapping
+	 * the entire 512GiB range is prohibitively expensive.
+	 */
+	kasan_populate_early_shadow((void *)shadow_cea_begin,
+				    (void *)shadow_cea_per_cpu_begin);
+
 	kasan_populate_early_shadow((void *)shadow_cea_end,
 			kasan_mem_to_shadow((void *)__START_KERNEL_map));
 
-- 
2.38.1.431.g37b22c650d-goog


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes
  2022-11-10 20:34 [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes Sean Christopherson
                   ` (4 preceding siblings ...)
  2022-11-10 20:35 ` [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area Sean Christopherson
@ 2022-11-14 11:57 ` Peter Zijlstra
  5 siblings, 0 replies; 25+ messages in thread
From: Peter Zijlstra @ 2022-11-14 11:57 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Dave Hansen, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, Andrey Ryabinin, H. Peter Anvin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, linux-kernel, kasan-dev,
	syzbot+ffb4f000dc2872c93f62, syzbot+8cdd16fd5a6c0565e227

On Thu, Nov 10, 2022 at 08:34:59PM +0000, Sean Christopherson wrote:
> Three fixes for the recent changes to how KASAN populates shadows for
> the per-CPU portion of the CPU entry areas.  The v1 versions were posted
> independently as I kept root causing issues after posting individual fixes.
> 
> v2:
>   - Map the entire per-CPU area in one shot. [Andrey]
>   - Use the "early", i.e. read-only, variant to populate the shadow for
>     the shared portion (read-only IDT mapping) of the CEA. [Andrey]
> 
> v1:
>   - https://lore.kernel.org/all/20221104212433.1339826-1-seanjc@google.com
>   - https://lore.kernel.org/all/20221104220053.1702977-1-seanjc@google.com
>   - https://lore.kernel.org/all/20221104183247.834988-1-seanjc@google.com
> 
> Sean Christopherson (5):
>   x86/mm: Recompute physical address for every page of per-CPU CEA
>     mapping
>   x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry
>     area
>   x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names
>   x86/kasan: Add helpers to align shadow addresses up and down
>   x86/kasan: Populate shadow for shared chunk of the CPU entry area
> 
>  arch/x86/mm/cpu_entry_area.c | 10 +++-----
>  arch/x86/mm/kasan_init_64.c  | 50 +++++++++++++++++++++++-------------
>  2 files changed, 36 insertions(+), 24 deletions(-)

Thanks for cleaning up that mess!

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping
  2022-11-10 20:35 ` [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping Sean Christopherson
@ 2022-11-14 14:09   ` Andrey Ryabinin
  2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
  2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: Andrey Ryabinin @ 2022-11-14 14:09 UTC (permalink / raw)
  To: Sean Christopherson, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	x86
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	syzbot+ffb4f000dc2872c93f62, syzbot+8cdd16fd5a6c0565e227



On 11/10/22 23:35, Sean Christopherson wrote:
> Recompute the physical address for each per-CPU page in the CPU entry
> area; a recent commit inadvertently modified cea_map_percpu_pages() such
> that every PTE is mapped to the physical address of the first page.
> 
> Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 2/5] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area
  2022-11-10 20:35 ` [PATCH v2 2/5] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area Sean Christopherson
@ 2022-11-14 14:10   ` Andrey Ryabinin
  2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
  2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: Andrey Ryabinin @ 2022-11-14 14:10 UTC (permalink / raw)
  To: Sean Christopherson, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	x86
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	syzbot+ffb4f000dc2872c93f62, syzbot+8cdd16fd5a6c0565e227



On 11/10/22 23:35, Sean Christopherson wrote:
> Populate a KASAN shadow for the entire possible per-CPU range of the CPU
> entry area instead of requiring that each individual chunk map a shadow.
> Mapping shadows individually is error prone, e.g. the per-CPU GDT mapping
> was left behind, which can lead to not-present page faults during KASAN
> validation if the kernel performs a software lookup into the GDT.  The DS
> buffer is also likely affected.
> 
> The motivation for mapping the per-CPU areas on-demand was to avoid
> mapping the entire 512GiB range that's reserved for the CPU entry area;
> shaving a few bytes by not creating shadows for potentially unused memory
> was not a goal.
> 
> The bug is most easily reproduced by doing a sigreturn with a garbage
> CS in the sigcontext, e.g.
> 
>   int main(void)
>   {
>     struct sigcontext regs;
> 
>     syscall(__NR_mmap, 0x1ffff000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);
>     syscall(__NR_mmap, 0x20000000ul, 0x1000000ul, 7ul, 0x32ul, -1, 0ul);
>     syscall(__NR_mmap, 0x21000000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);
> 
>     memset(&regs, 0, sizeof(regs));
>     regs.cs = 0x1d0;
>     syscall(__NR_rt_sigreturn);
>     return 0;
>   }
> 
> to coerce the kernel into doing a GDT lookup to compute CS.base when
> reading the instruction bytes on the subsequent #GP to determine whether
> or not the #GP is something the kernel should handle, e.g. to fixup UMIP
> violations or to emulate CLI/STI for IOPL=3 applications.
> 
>   BUG: unable to handle page fault for address: fffffbc8379ace00
>   #PF: supervisor read access in kernel mode
>   #PF: error_code(0x0000) - not-present page
>   PGD 16c03a067 P4D 16c03a067 PUD 15b990067 PMD 15b98f067 PTE 0
>   Oops: 0000 [#1] PREEMPT SMP KASAN
>   CPU: 3 PID: 851 Comm: r2 Not tainted 6.1.0-rc3-next-20221103+ #432
>   Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
>   RIP: 0010:kasan_check_range+0xdf/0x190
>   Call Trace:
>    <TASK>
>    get_desc+0xb0/0x1d0
>    insn_get_seg_base+0x104/0x270
>    insn_fetch_from_user+0x66/0x80
>    fixup_umip_exception+0xb1/0x530
>    exc_general_protection+0x181/0x210
>    asm_exc_general_protection+0x22/0x30
>   RIP: 0003:0x0
>   Code: Unable to access opcode bytes at 0xffffffffffffffd6.
>   RSP: 0003:0000000000000000 EFLAGS: 00000202
>   RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000000001d0
>   RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
>   RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
>   R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
>   R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
>    </TASK>
> 
> Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
> Reported-by: syzbot+ffb4f000dc2872c93f62@syzkaller.appspotmail.com
> Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Andrey Konovalov <andreyknvl@gmail.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Cc: kasan-dev@googlegroups.com
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/mm/cpu_entry_area.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 

Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 3/5] x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names
  2022-11-10 20:35 ` [PATCH v2 3/5] x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names Sean Christopherson
@ 2022-11-14 14:10   ` Andrey Ryabinin
  2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
  2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: Andrey Ryabinin @ 2022-11-14 14:10 UTC (permalink / raw)
  To: Sean Christopherson, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	x86
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	syzbot+ffb4f000dc2872c93f62, syzbot+8cdd16fd5a6c0565e227



On 11/10/22 23:35, Sean Christopherson wrote:
> Rename the CPU entry area variables in kasan_init() to shorten their
> names; a future fix will reference the beginning of the per-CPU portion
> of the CPU entry area, and shadow_cpu_entry_per_cpu_begin is a bit much.
> 
> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/mm/kasan_init_64.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
> 

Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 4/5] x86/kasan: Add helpers to align shadow addresses up and down
  2022-11-10 20:35 ` [PATCH v2 4/5] x86/kasan: Add helpers to align shadow addresses up and down Sean Christopherson
@ 2022-11-14 14:13   ` Andrey Ryabinin
  2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
  2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: Andrey Ryabinin @ 2022-11-14 14:13 UTC (permalink / raw)
  To: Sean Christopherson, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	x86
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	syzbot+ffb4f000dc2872c93f62, syzbot+8cdd16fd5a6c0565e227



On 11/10/22 23:35, Sean Christopherson wrote:
> Add helpers to dedup code for aligning shadow address up/down to page
> boundaries when translating an address to its shadow.
> 
> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/mm/kasan_init_64.c | 40 ++++++++++++++++++++-----------------
>  1 file changed, 22 insertions(+), 18 deletions(-)
> 


Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area
  2022-11-10 20:35 ` [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area Sean Christopherson
@ 2022-11-14 14:44   ` Andrey Ryabinin
  2022-11-14 15:12     ` Peter Zijlstra
  2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
  2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 1 reply; 25+ messages in thread
From: Andrey Ryabinin @ 2022-11-14 14:44 UTC (permalink / raw)
  To: Sean Christopherson, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	x86
  Cc: H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	syzbot+ffb4f000dc2872c93f62, syzbot+8cdd16fd5a6c0565e227



On 11/10/22 23:35, Sean Christopherson wrote:

>  
> +	/*
> +	 * Populate the shadow for the shared portion of the CPU entry area.
> +	 * Shadows for the per-CPU areas are mapped on-demand, as each CPU's
> +	 * area is randomly placed somewhere in the 512GiB range and mapping
> +	 * the entire 512GiB range is prohibitively expensive.
> +	 */
> +	kasan_populate_early_shadow((void *)shadow_cea_begin,
> +				    (void *)shadow_cea_per_cpu_begin);
> +

I know I suggested using "early" here, but I just realized that this might be a problem.
This will actually map a shadow page covering 8 pages (1 << KASAN_SHADOW_SCALE_SHIFT) of the
original memory.
In case some per-CPU entry area starts right at CPU_ENTRY_AREA_PER_CPU, its shadow will
be covered by kasan_early_shadow_page instead of the usual one.
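
A quick worked example, assuming KASAN_SHADOW_SCALE_SHIFT == 3, 4KiB pages, and
CPU_ENTRY_AREA_PER_CPU == CPU_ENTRY_AREA_BASE + PAGE_SIZE:

  shadow(CPU_ENTRY_AREA_PER_CPU) = shadow(CPU_ENTRY_AREA_BASE) + PAGE_SIZE / 8
                                 = shadow(CPU_ENTRY_AREA_BASE) + 512

Rounding that up to a page boundary lands at shadow(CPU_ENTRY_AREA_BASE) + PAGE_SIZE,
which maps back to CPU_ENTRY_AREA_BASE + (PAGE_SIZE << 3), i.e. the read-only early
shadow page also covers the first 7 per-CPU pages after the IDT page, not just the
shared portion.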

So we need to go back to your v1 patch, or alternatively we can round up CPU_ENTRY_AREA_PER_CPU:
#define CPU_ENTRY_AREA_PER_CPU		(CPU_ENTRY_AREA_RO_IDT + (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT))

Such a change will also require fixing up the max_cea calculation in init_cea_offsets().


Going back to kasan_populate_shadow() seems like the safer and easier choice.  The only
disadvantage is that we might waste 1 page, which is not much compared to the KASAN memory overhead.



>  	kasan_populate_early_shadow((void *)shadow_cea_end,
>  			kasan_mem_to_shadow((void *)__START_KERNEL_map));
>  

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area
  2022-11-14 14:44   ` Andrey Ryabinin
@ 2022-11-14 15:12     ` Peter Zijlstra
  2022-11-14 17:53       ` Sean Christopherson
  0 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2022-11-14 15:12 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Sean Christopherson, Dave Hansen, Andy Lutomirski,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, linux-kernel, kasan-dev,
	syzbot+ffb4f000dc2872c93f62, syzbot+8cdd16fd5a6c0565e227

On Mon, Nov 14, 2022 at 05:44:00PM +0300, Andrey Ryabinin wrote:
> Going back to kasan_populate_shadow() seems like the safer and easier choice.
> The only disadvantage is that we might waste 1 page, which is not
> much compared to the KASAN memory overhead.

So the below delta?

---
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -388,7 +388,7 @@ void __init kasan_init(void)
 	shadow_cea_end = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_BASE +
 						      CPU_ENTRY_AREA_MAP_SIZE);
 
-	kasan_populate_early_shadow(
+	kasan_populate_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
 		kasan_mem_to_shadow((void *)VMALLOC_START));
 

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area
  2022-11-14 15:12     ` Peter Zijlstra
@ 2022-11-14 17:53       ` Sean Christopherson
  2022-11-14 21:46         ` Peter Zijlstra
  0 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2022-11-14 17:53 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Andrey Ryabinin, Dave Hansen, Andy Lutomirski, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, linux-kernel, kasan-dev,
	syzbot+ffb4f000dc2872c93f62, syzbot+8cdd16fd5a6c0565e227

On Mon, Nov 14, 2022, Peter Zijlstra wrote:
> On Mon, Nov 14, 2022 at 05:44:00PM +0300, Andrey Ryabinin wrote:
> > Going back to kasan_populate_shadow() seems like the safer and easier choice.
> > The only disadvantage is that we might waste 1 page, which is not
> > much compared to the KASAN memory overhead.
> 
> So the below delta?
> 
> ---
> --- a/arch/x86/mm/kasan_init_64.c
> +++ b/arch/x86/mm/kasan_init_64.c
> @@ -388,7 +388,7 @@ void __init kasan_init(void)
>  	shadow_cea_end = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_BASE +
>  						      CPU_ENTRY_AREA_MAP_SIZE);
>  
> -	kasan_populate_early_shadow(
> +	kasan_populate_shadow(
>  		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
>  		kasan_mem_to_shadow((void *)VMALLOC_START));

Wrong one, that's the existing mapping.  To get back to v1:

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index af82046348a0..0302491d799d 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -416,8 +416,8 @@ void __init kasan_init(void)
         * area is randomly placed somewhere in the 512GiB range and mapping
         * the entire 512GiB range is prohibitively expensive.
         */
-       kasan_populate_early_shadow((void *)shadow_cea_begin,
-                                   (void *)shadow_cea_per_cpu_begin);
+       kasan_populate_shadow(shadow_cea_begin,
+                             shadow_cea_per_cpu_begin, 0);
 
        kasan_populate_early_shadow((void *)shadow_cea_end,
                        kasan_mem_to_shadow((void *)__START_KERNEL_map));

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area
  2022-11-14 17:53       ` Sean Christopherson
@ 2022-11-14 21:46         ` Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: Peter Zijlstra @ 2022-11-14 21:46 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Andrey Ryabinin, Dave Hansen, Andy Lutomirski, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, linux-kernel, kasan-dev,
	syzbot+ffb4f000dc2872c93f62, syzbot+8cdd16fd5a6c0565e227

On Mon, Nov 14, 2022 at 05:53:43PM +0000, Sean Christopherson wrote:

> Wrong one, that's the existing mapping.  To get back to v1:
> 
> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
> index af82046348a0..0302491d799d 100644
> --- a/arch/x86/mm/kasan_init_64.c
> +++ b/arch/x86/mm/kasan_init_64.c
> @@ -416,8 +416,8 @@ void __init kasan_init(void)
>          * area is randomly placed somewhere in the 512GiB range and mapping
>          * the entire 512GiB range is prohibitively expensive.
>          */
> -       kasan_populate_early_shadow((void *)shadow_cea_begin,
> -                                   (void *)shadow_cea_per_cpu_begin);
> +       kasan_populate_shadow(shadow_cea_begin,
> +                             shadow_cea_per_cpu_begin, 0);
>  
>         kasan_populate_early_shadow((void *)shadow_cea_end,
>                         kasan_mem_to_shadow((void *)__START_KERNEL_map));

OK. It now looks like so:

  https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git/commit/?h=x86/mm&id=14ca169feec3cb442ef4d322f8f65ba360f42784

If the robots don't hate on it because I fat fingered it or something
stupid, I'll go push it out tomorrow.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [tip: x86/mm] x86/kasan: Populate shadow for shared chunk of the CPU entry area
  2022-11-10 20:35 ` [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area Sean Christopherson
  2022-11-14 14:44   ` Andrey Ryabinin
@ 2022-11-15 22:26   ` tip-bot2 for Sean Christopherson
  2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot2 for Sean Christopherson @ 2022-11-15 22:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: syzbot+8cdd16fd5a6c0565e227, Sean Christopherson,
	Peter Zijlstra (Intel),
	x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     f2089aa0cd8e52564240a93ea1e4bb643c0ed34c
Gitweb:        https://git.kernel.org/tip/f2089aa0cd8e52564240a93ea1e4bb643c0ed34c
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:04 
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 15 Nov 2022 22:30:00 +01:00

x86/kasan: Populate shadow for shared chunk of the CPU entry area

Populate the shadow for the shared portion of the CPU entry area, i.e.
the read-only IDT mapping, during KASAN initialization.  A recent change
modified KASAN to map the per-CPU areas on-demand, but forgot to keep a
shadow for the common area that is shared amongst all CPUs.

Map the common area in KASAN init instead of letting idt_map_in_cea() do
the dirty work so that it Just Works in the unlikely event more shared
data is shoved into the CPU entry area.

The bug manifests as a not-present #PF when software attempts to lookup
an IDT entry, e.g. when KVM is handling IRQs on Intel CPUs (KVM performs
direct CALL to the IRQ handler to avoid the overhead of INTn):

 BUG: unable to handle page fault for address: fffffbc0000001d8
 #PF: supervisor read access in kernel mode
 #PF: error_code(0x0000) - not-present page
 PGD 16c03a067 P4D 16c03a067 PUD 0
 Oops: 0000 [#1] PREEMPT SMP KASAN
 CPU: 5 PID: 901 Comm: repro Tainted: G        W          6.1.0-rc3+ #410
 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
 RIP: 0010:kasan_check_range+0xdf/0x190
  vmx_handle_exit_irqoff+0x152/0x290 [kvm_intel]
  vcpu_run+0x1d89/0x2bd0 [kvm]
  kvm_arch_vcpu_ioctl_run+0x3ce/0xa70 [kvm]
  kvm_vcpu_ioctl+0x349/0x900 [kvm]
  __x64_sys_ioctl+0xb8/0xf0
  do_syscall_64+0x2b/0x50
  entry_SYSCALL_64_after_hwframe+0x46/0xb0

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Reported-by: syzbot+8cdd16fd5a6c0565e227@syzkaller.appspotmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221110203504.1985010-6-seanjc@google.com
---
 arch/x86/mm/kasan_init_64.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index afc5e12..0302491 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -341,7 +341,7 @@ void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
 
 void __init kasan_init(void)
 {
-	unsigned long shadow_cea_begin, shadow_cea_end;
+	unsigned long shadow_cea_begin, shadow_cea_per_cpu_begin, shadow_cea_end;
 	int i;
 
 	memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
@@ -384,6 +384,7 @@ void __init kasan_init(void)
 	}
 
 	shadow_cea_begin = kasan_mem_to_shadow_align_down(CPU_ENTRY_AREA_BASE);
+	shadow_cea_per_cpu_begin = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_PER_CPU);
 	shadow_cea_end = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_BASE +
 						      CPU_ENTRY_AREA_MAP_SIZE);
 
@@ -409,6 +410,15 @@ void __init kasan_init(void)
 		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
 		(void *)shadow_cea_begin);
 
+	/*
+	 * Populate the shadow for the shared portion of the CPU entry area.
+	 * Shadows for the per-CPU areas are mapped on-demand, as each CPU's
+	 * area is randomly placed somewhere in the 512GiB range and mapping
+	 * the entire 512GiB range is prohibitively expensive.
+	 */
+	kasan_populate_shadow(shadow_cea_begin,
+			      shadow_cea_per_cpu_begin, 0);
+
 	kasan_populate_early_shadow((void *)shadow_cea_end,
 			kasan_mem_to_shadow((void *)__START_KERNEL_map));
 

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip: x86/mm] x86/kasan: Add helpers to align shadow addresses up and down
  2022-11-10 20:35 ` [PATCH v2 4/5] x86/kasan: Add helpers to align shadow addresses up and down Sean Christopherson
  2022-11-14 14:13   ` Andrey Ryabinin
@ 2022-11-15 22:26   ` tip-bot2 for Sean Christopherson
  2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot2 for Sean Christopherson @ 2022-11-15 22:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Sean Christopherson, Peter Zijlstra (Intel),
	Andrey Ryabinin, x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     74b5a69c2a577d4fdba581171e3ebf33cddbddc1
Gitweb:        https://git.kernel.org/tip/74b5a69c2a577d4fdba581171e3ebf33cddbddc1
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:03 
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 15 Nov 2022 22:29:59 +01:00

x86/kasan: Add helpers to align shadow addresses up and down

Add helpers to dedup code for aligning shadow address up/down to page
boundaries when translating an address to its shadow.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-5-seanjc@google.com
---
 arch/x86/mm/kasan_init_64.c | 40 +++++++++++++++++++-----------------
 1 file changed, 22 insertions(+), 18 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index ad7872a..afc5e12 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -316,22 +316,33 @@ void __init kasan_early_init(void)
 	kasan_map_early_shadow(init_top_pgt);
 }
 
+static unsigned long kasan_mem_to_shadow_align_down(unsigned long va)
+{
+	unsigned long shadow = (unsigned long)kasan_mem_to_shadow((void *)va);
+
+	return round_down(shadow, PAGE_SIZE);
+}
+
+static unsigned long kasan_mem_to_shadow_align_up(unsigned long va)
+{
+	unsigned long shadow = (unsigned long)kasan_mem_to_shadow((void *)va);
+
+	return round_up(shadow, PAGE_SIZE);
+}
+
 void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
 {
 	unsigned long shadow_start, shadow_end;
 
-	shadow_start = (unsigned long)kasan_mem_to_shadow(va);
-	shadow_start = round_down(shadow_start, PAGE_SIZE);
-	shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
-	shadow_end = round_up(shadow_end, PAGE_SIZE);
-
+	shadow_start = kasan_mem_to_shadow_align_down((unsigned long)va);
+	shadow_end = kasan_mem_to_shadow_align_up((unsigned long)va + size);
 	kasan_populate_shadow(shadow_start, shadow_end, nid);
 }
 
 void __init kasan_init(void)
 {
+	unsigned long shadow_cea_begin, shadow_cea_end;
 	int i;
-	void *shadow_cea_begin, *shadow_cea_end;
 
 	memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
 
@@ -372,16 +383,9 @@ void __init kasan_init(void)
 		map_range(&pfn_mapped[i]);
 	}
 
-	shadow_cea_begin = (void *)CPU_ENTRY_AREA_BASE;
-	shadow_cea_begin = kasan_mem_to_shadow(shadow_cea_begin);
-	shadow_cea_begin = (void *)round_down(
-			(unsigned long)shadow_cea_begin, PAGE_SIZE);
-
-	shadow_cea_end = (void *)(CPU_ENTRY_AREA_BASE +
-					CPU_ENTRY_AREA_MAP_SIZE);
-	shadow_cea_end = kasan_mem_to_shadow(shadow_cea_end);
-	shadow_cea_end = (void *)round_up(
-			(unsigned long)shadow_cea_end, PAGE_SIZE);
+	shadow_cea_begin = kasan_mem_to_shadow_align_down(CPU_ENTRY_AREA_BASE);
+	shadow_cea_end = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_BASE +
+						      CPU_ENTRY_AREA_MAP_SIZE);
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
@@ -403,9 +407,9 @@ void __init kasan_init(void)
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
-		shadow_cea_begin);
+		(void *)shadow_cea_begin);
 
-	kasan_populate_early_shadow(shadow_cea_end,
+	kasan_populate_early_shadow((void *)shadow_cea_end,
 			kasan_mem_to_shadow((void *)__START_KERNEL_map));
 
 	kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(_stext),

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip: x86/mm] x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names
  2022-11-10 20:35 ` [PATCH v2 3/5] x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names Sean Christopherson
  2022-11-14 14:10   ` Andrey Ryabinin
@ 2022-11-15 22:26   ` tip-bot2 for Sean Christopherson
  2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot2 for Sean Christopherson @ 2022-11-15 22:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Sean Christopherson, Peter Zijlstra (Intel),
	Andrey Ryabinin, x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     e93cc3aa893e74ea391e06aba09cc4bf523c12c8
Gitweb:        https://git.kernel.org/tip/e93cc3aa893e74ea391e06aba09cc4bf523c12c8
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:02 
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 15 Nov 2022 22:29:59 +01:00

x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names

Rename the CPU entry area variables in kasan_init() to shorten their
names; a future fix will reference the beginning of the per-CPU portion
of the CPU entry area, and shadow_cpu_entry_per_cpu_begin is a bit much.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-4-seanjc@google.com
---
 arch/x86/mm/kasan_init_64.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index d141692..ad7872a 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -331,7 +331,7 @@ void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
 void __init kasan_init(void)
 {
 	int i;
-	void *shadow_cpu_entry_begin, *shadow_cpu_entry_end;
+	void *shadow_cea_begin, *shadow_cea_end;
 
 	memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
 
@@ -372,16 +372,16 @@ void __init kasan_init(void)
 		map_range(&pfn_mapped[i]);
 	}
 
-	shadow_cpu_entry_begin = (void *)CPU_ENTRY_AREA_BASE;
-	shadow_cpu_entry_begin = kasan_mem_to_shadow(shadow_cpu_entry_begin);
-	shadow_cpu_entry_begin = (void *)round_down(
-			(unsigned long)shadow_cpu_entry_begin, PAGE_SIZE);
+	shadow_cea_begin = (void *)CPU_ENTRY_AREA_BASE;
+	shadow_cea_begin = kasan_mem_to_shadow(shadow_cea_begin);
+	shadow_cea_begin = (void *)round_down(
+			(unsigned long)shadow_cea_begin, PAGE_SIZE);
 
-	shadow_cpu_entry_end = (void *)(CPU_ENTRY_AREA_BASE +
+	shadow_cea_end = (void *)(CPU_ENTRY_AREA_BASE +
 					CPU_ENTRY_AREA_MAP_SIZE);
-	shadow_cpu_entry_end = kasan_mem_to_shadow(shadow_cpu_entry_end);
-	shadow_cpu_entry_end = (void *)round_up(
-			(unsigned long)shadow_cpu_entry_end, PAGE_SIZE);
+	shadow_cea_end = kasan_mem_to_shadow(shadow_cea_end);
+	shadow_cea_end = (void *)round_up(
+			(unsigned long)shadow_cea_end, PAGE_SIZE);
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
@@ -403,9 +403,9 @@ void __init kasan_init(void)
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
-		shadow_cpu_entry_begin);
+		shadow_cea_begin);
 
-	kasan_populate_early_shadow(shadow_cpu_entry_end,
+	kasan_populate_early_shadow(shadow_cea_end,
 			kasan_mem_to_shadow((void *)__START_KERNEL_map));
 
 	kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(_stext),

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip: x86/mm] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area
  2022-11-10 20:35 ` [PATCH v2 2/5] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area Sean Christopherson
  2022-11-14 14:10   ` Andrey Ryabinin
@ 2022-11-15 22:26   ` tip-bot2 for Sean Christopherson
  2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot2 for Sean Christopherson @ 2022-11-15 22:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: syzbot+ffb4f000dc2872c93f62, Andrey Ryabinin,
	Sean Christopherson, Peter Zijlstra (Intel),
	x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     4917fc63dc646d6346f5d67ce8c10df874a6f4fe
Gitweb:        https://git.kernel.org/tip/4917fc63dc646d6346f5d67ce8c10df874a6f4fe
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:01 
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 15 Nov 2022 22:29:59 +01:00

x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area

Populate a KASAN shadow for the entire possible per-CPU range of the CPU
entry area instead of requiring that each individual chunk map a shadow.
Mapping shadows individually is error prone, e.g. the per-CPU GDT mapping
was left behind, which can lead to not-present page faults during KASAN
validation if the kernel performs a software lookup into the GDT.  The DS
buffer is also likely affected.

The motivation for mapping the per-CPU areas on-demand was to avoid
mapping the entire 512GiB range that's reserved for the CPU entry area;
shaving a few bytes by not creating shadows for potentially unused memory
was not a goal.

The bug is most easily reproduced by doing a sigreturn with a garbage
CS in the sigcontext, e.g.

  int main(void)
  {
    struct sigcontext regs;

    syscall(__NR_mmap, 0x1ffff000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);
    syscall(__NR_mmap, 0x20000000ul, 0x1000000ul, 7ul, 0x32ul, -1, 0ul);
    syscall(__NR_mmap, 0x21000000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);

    memset(&regs, 0, sizeof(regs));
    regs.cs = 0x1d0;
    syscall(__NR_rt_sigreturn);
    return 0;
  }

to coerce the kernel into doing a GDT lookup to compute CS.base when
reading the instruction bytes on the subsequent #GP to determine whether
or not the #GP is something the kernel should handle, e.g. to fixup UMIP
violations or to emulate CLI/STI for IOPL=3 applications.

  BUG: unable to handle page fault for address: fffffbc8379ace00
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 16c03a067 P4D 16c03a067 PUD 15b990067 PMD 15b98f067 PTE 0
  Oops: 0000 [#1] PREEMPT SMP KASAN
  CPU: 3 PID: 851 Comm: r2 Not tainted 6.1.0-rc3-next-20221103+ #432
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
  RIP: 0010:kasan_check_range+0xdf/0x190
  Call Trace:
   <TASK>
   get_desc+0xb0/0x1d0
   insn_get_seg_base+0x104/0x270
   insn_fetch_from_user+0x66/0x80
   fixup_umip_exception+0xb1/0x530
   exc_general_protection+0x181/0x210
   asm_exc_general_protection+0x22/0x30
  RIP: 0003:0x0
  Code: Unable to access opcode bytes at 0xffffffffffffffd6.
  RSP: 0003:0000000000000000 EFLAGS: 00000202
  RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000000001d0
  RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
  RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
  R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
   </TASK>

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Reported-by: syzbot+ffb4f000dc2872c93f62@syzkaller.appspotmail.com
Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-3-seanjc@google.com
---
 arch/x86/mm/cpu_entry_area.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index d831aae..7c855df 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -91,11 +91,6 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
 static void __init
 cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 {
-	phys_addr_t pa = per_cpu_ptr_to_phys(ptr);
-
-	kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
-					early_pfn_to_nid(PFN_DOWN(pa)));
-
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
 		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
@@ -195,6 +190,9 @@ static void __init setup_cpu_entry_area(unsigned int cpu)
 	pgprot_t tss_prot = PAGE_KERNEL;
 #endif
 
+	kasan_populate_shadow_for_vaddr(cea, CPU_ENTRY_AREA_SIZE,
+					early_cpu_to_node(cpu));
+
 	cea_set_pte(&cea->gdt, get_cpu_gdt_paddr(cpu), gdt_prot);
 
 	cea_map_percpu_pages(&cea->entry_stack_page,

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip: x86/mm] x86/mm: Recompute physical address for every page of per-CPU CEA mapping
  2022-11-10 20:35 ` [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping Sean Christopherson
  2022-11-14 14:09   ` Andrey Ryabinin
@ 2022-11-15 22:26   ` tip-bot2 for Sean Christopherson
  2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot2 for Sean Christopherson @ 2022-11-15 22:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Sean Christopherson, Peter Zijlstra (Intel),
	Andrey Ryabinin, x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     991ab455645118e83fe9f2f9aea6ee383908ffd4
Gitweb:        https://git.kernel.org/tip/991ab455645118e83fe9f2f9aea6ee383908ffd4
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:00 
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 15 Nov 2022 22:29:58 +01:00

x86/mm: Recompute physical address for every page of per-CPU CEA mapping

Recompute the physical address for each per-CPU page in the CPU entry
area; a recent commit inadvertently modified cea_map_percpu_pages() such
that every PTE is mapped to the physical address of the first page.

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-2-seanjc@google.com
---
 arch/x86/mm/cpu_entry_area.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index dff9001..d831aae 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -97,7 +97,7 @@ cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 					early_pfn_to_nid(PFN_DOWN(pa)));
 
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
-		cea_set_pte(cea_vaddr, pa, prot);
+		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
 
 static void __init percpu_setup_debug_store(unsigned int cpu)

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip: x86/mm] x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names
  2022-11-10 20:35 ` [PATCH v2 3/5] x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names Sean Christopherson
  2022-11-14 14:10   ` Andrey Ryabinin
  2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
@ 2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot2 for Sean Christopherson @ 2022-12-17 18:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Sean Christopherson, Peter Zijlstra (Intel),
	Andrey Ryabinin, x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     7077d2ccb94dafd00b29cc2d601c9f6891648f5b
Gitweb:        https://git.kernel.org/tip/7077d2ccb94dafd00b29cc2d601c9f6891648f5b
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:02 
Committer:     Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Thu, 15 Dec 2022 10:37:28 -08:00

x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names

Rename the CPU entry area variables in kasan_init() to shorten their
names; a future fix will reference the beginning of the per-CPU portion
of the CPU entry area, and shadow_cpu_entry_per_cpu_begin is a bit much.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-4-seanjc@google.com
---
 arch/x86/mm/kasan_init_64.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index d141692..ad7872a 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -331,7 +331,7 @@ void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
 void __init kasan_init(void)
 {
 	int i;
-	void *shadow_cpu_entry_begin, *shadow_cpu_entry_end;
+	void *shadow_cea_begin, *shadow_cea_end;
 
 	memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
 
@@ -372,16 +372,16 @@ void __init kasan_init(void)
 		map_range(&pfn_mapped[i]);
 	}
 
-	shadow_cpu_entry_begin = (void *)CPU_ENTRY_AREA_BASE;
-	shadow_cpu_entry_begin = kasan_mem_to_shadow(shadow_cpu_entry_begin);
-	shadow_cpu_entry_begin = (void *)round_down(
-			(unsigned long)shadow_cpu_entry_begin, PAGE_SIZE);
+	shadow_cea_begin = (void *)CPU_ENTRY_AREA_BASE;
+	shadow_cea_begin = kasan_mem_to_shadow(shadow_cea_begin);
+	shadow_cea_begin = (void *)round_down(
+			(unsigned long)shadow_cea_begin, PAGE_SIZE);
 
-	shadow_cpu_entry_end = (void *)(CPU_ENTRY_AREA_BASE +
+	shadow_cea_end = (void *)(CPU_ENTRY_AREA_BASE +
 					CPU_ENTRY_AREA_MAP_SIZE);
-	shadow_cpu_entry_end = kasan_mem_to_shadow(shadow_cpu_entry_end);
-	shadow_cpu_entry_end = (void *)round_up(
-			(unsigned long)shadow_cpu_entry_end, PAGE_SIZE);
+	shadow_cea_end = kasan_mem_to_shadow(shadow_cea_end);
+	shadow_cea_end = (void *)round_up(
+			(unsigned long)shadow_cea_end, PAGE_SIZE);
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
@@ -403,9 +403,9 @@ void __init kasan_init(void)
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
-		shadow_cpu_entry_begin);
+		shadow_cea_begin);
 
-	kasan_populate_early_shadow(shadow_cpu_entry_end,
+	kasan_populate_early_shadow(shadow_cea_end,
 			kasan_mem_to_shadow((void *)__START_KERNEL_map));
 
 	kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(_stext),

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip: x86/mm] x86/kasan: Add helpers to align shadow addresses up and down
  2022-11-10 20:35 ` [PATCH v2 4/5] x86/kasan: Add helpers to align shadow addresses up and down Sean Christopherson
  2022-11-14 14:13   ` Andrey Ryabinin
  2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
@ 2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot2 for Sean Christopherson @ 2022-12-17 18:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Sean Christopherson, Peter Zijlstra (Intel),
	Andrey Ryabinin, x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     bde258d97409f2a45243cb393a55ea9ecfc7aba5
Gitweb:        https://git.kernel.org/tip/bde258d97409f2a45243cb393a55ea9ecfc7aba5
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:03 
Committer:     Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Thu, 15 Dec 2022 10:37:28 -08:00

x86/kasan: Add helpers to align shadow addresses up and down

Add helpers to dedup code for aligning shadow addresses up/down to page
boundaries when translating an address to its shadow.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-5-seanjc@google.com
---
 arch/x86/mm/kasan_init_64.c | 40 +++++++++++++++++++-----------------
 1 file changed, 22 insertions(+), 18 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index ad7872a..afc5e12 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -316,22 +316,33 @@ void __init kasan_early_init(void)
 	kasan_map_early_shadow(init_top_pgt);
 }
 
+static unsigned long kasan_mem_to_shadow_align_down(unsigned long va)
+{
+	unsigned long shadow = (unsigned long)kasan_mem_to_shadow((void *)va);
+
+	return round_down(shadow, PAGE_SIZE);
+}
+
+static unsigned long kasan_mem_to_shadow_align_up(unsigned long va)
+{
+	unsigned long shadow = (unsigned long)kasan_mem_to_shadow((void *)va);
+
+	return round_up(shadow, PAGE_SIZE);
+}
+
 void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
 {
 	unsigned long shadow_start, shadow_end;
 
-	shadow_start = (unsigned long)kasan_mem_to_shadow(va);
-	shadow_start = round_down(shadow_start, PAGE_SIZE);
-	shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
-	shadow_end = round_up(shadow_end, PAGE_SIZE);
-
+	shadow_start = kasan_mem_to_shadow_align_down((unsigned long)va);
+	shadow_end = kasan_mem_to_shadow_align_up((unsigned long)va + size);
 	kasan_populate_shadow(shadow_start, shadow_end, nid);
 }
 
 void __init kasan_init(void)
 {
+	unsigned long shadow_cea_begin, shadow_cea_end;
 	int i;
-	void *shadow_cea_begin, *shadow_cea_end;
 
 	memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
 
@@ -372,16 +383,9 @@ void __init kasan_init(void)
 		map_range(&pfn_mapped[i]);
 	}
 
-	shadow_cea_begin = (void *)CPU_ENTRY_AREA_BASE;
-	shadow_cea_begin = kasan_mem_to_shadow(shadow_cea_begin);
-	shadow_cea_begin = (void *)round_down(
-			(unsigned long)shadow_cea_begin, PAGE_SIZE);
-
-	shadow_cea_end = (void *)(CPU_ENTRY_AREA_BASE +
-					CPU_ENTRY_AREA_MAP_SIZE);
-	shadow_cea_end = kasan_mem_to_shadow(shadow_cea_end);
-	shadow_cea_end = (void *)round_up(
-			(unsigned long)shadow_cea_end, PAGE_SIZE);
+	shadow_cea_begin = kasan_mem_to_shadow_align_down(CPU_ENTRY_AREA_BASE);
+	shadow_cea_end = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_BASE +
+						      CPU_ENTRY_AREA_MAP_SIZE);
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
@@ -403,9 +407,9 @@ void __init kasan_init(void)
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
-		shadow_cea_begin);
+		(void *)shadow_cea_begin);
 
-	kasan_populate_early_shadow(shadow_cea_end,
+	kasan_populate_early_shadow((void *)shadow_cea_end,
 			kasan_mem_to_shadow((void *)__START_KERNEL_map));
 
 	kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(_stext),
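
For reference, a minimal stand-alone sketch of the translate-and-align math
these helpers perform, assuming generic KASAN (scale shift of 3) and the
default x86-64 shadow offset; the names and constants below are illustrative
and not copied from the kernel:

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SIZE          4096ULL
  #define SHADOW_SCALE_SHIFT 3                       /* 1 shadow byte per 8 bytes */
  #define SHADOW_OFFSET      0xdffffc0000000000ULL   /* default x86-64 shadow offset */
  #define CEA_BASE           0xfffffe0000000000ULL   /* CPU_ENTRY_AREA_BASE */
  #define CEA_PER_CPU        (CEA_BASE + PAGE_SIZE)  /* CPU_ENTRY_AREA_PER_CPU */

  /* Translate a virtual address to its shadow address. */
  static uint64_t mem_to_shadow(uint64_t va)
  {
          return (va >> SHADOW_SCALE_SHIFT) + SHADOW_OFFSET;
  }

  /* Shadow address rounded down to a page boundary. */
  static uint64_t shadow_align_down(uint64_t va)
  {
          return mem_to_shadow(va) & ~(PAGE_SIZE - 1);
  }

  /* Shadow address rounded up to a page boundary. */
  static uint64_t shadow_align_up(uint64_t va)
  {
          return (mem_to_shadow(va) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
  }

  int main(void)
  {
          printf("shared CEA chunk shadow: [0x%llx, 0x%llx)\n",
                 (unsigned long long)shadow_align_down(CEA_BASE),
                 (unsigned long long)shadow_align_up(CEA_PER_CPU));
          return 0;
  }

Run as-is it prints the one-page range [0xfffffbc000000000, 0xfffffbc000001000),
i.e. the shadow range that the shared-chunk patch below populates for the
read-only IDT mapping at the start of the CPU entry area.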

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip: x86/mm] x86/kasan: Populate shadow for shared chunk of the CPU entry area
  2022-11-10 20:35 ` [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area Sean Christopherson
  2022-11-14 14:44   ` Andrey Ryabinin
  2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
@ 2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot2 for Sean Christopherson @ 2022-12-17 18:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: syzbot+8cdd16fd5a6c0565e227, Sean Christopherson,
	Peter Zijlstra (Intel),
	x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     1cfaac2400c73378e78182a706be0f3ac8b93cd7
Gitweb:        https://git.kernel.org/tip/1cfaac2400c73378e78182a706be0f3ac8b93cd7
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:04 
Committer:     Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Thu, 15 Dec 2022 10:37:28 -08:00

x86/kasan: Populate shadow for shared chunk of the CPU entry area

Populate the shadow for the shared portion of the CPU entry area, i.e.
the read-only IDT mapping, during KASAN initialization.  A recent change
modified KASAN to map the per-CPU areas on-demand, but forgot to keep a
shadow for the common area that is shared amongst all CPUs.

Map the common area in KASAN init instead of letting idt_map_in_cea() do
the dirty work so that it Just Works in the unlikely event more shared
data is shoved into the CPU entry area.

The bug manifests as a not-present #PF when software attempts to look up
an IDT entry, e.g. when KVM is handling IRQs on Intel CPUs (KVM performs
direct CALL to the IRQ handler to avoid the overhead of INTn):

 BUG: unable to handle page fault for address: fffffbc0000001d8
 #PF: supervisor read access in kernel mode
 #PF: error_code(0x0000) - not-present page
 PGD 16c03a067 P4D 16c03a067 PUD 0
 Oops: 0000 [#1] PREEMPT SMP KASAN
 CPU: 5 PID: 901 Comm: repro Tainted: G        W          6.1.0-rc3+ #410
 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
 RIP: 0010:kasan_check_range+0xdf/0x190
  vmx_handle_exit_irqoff+0x152/0x290 [kvm_intel]
  vcpu_run+0x1d89/0x2bd0 [kvm]
  kvm_arch_vcpu_ioctl_run+0x3ce/0xa70 [kvm]
  kvm_vcpu_ioctl+0x349/0x900 [kvm]
  __x64_sys_ioctl+0xb8/0xf0
  do_syscall_64+0x2b/0x50
  entry_SYSCALL_64_after_hwframe+0x46/0xb0
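
Working the faulting shadow address back to a virtual address (assuming the
default x86-64 shadow offset of 0xdffffc0000000000; illustration only) shows
the access is a gate-descriptor read in the shared IDT page at the very start
of the CPU entry area, i.e. in the chunk whose shadow was left unpopulated:

  shadow fault address:  0xfffffbc0000001d8
  original address:      (0xfffffbc0000001d8 - 0xdffffc0000000000) << 3
                         = 0xfffffe0000000ec0
                         = CPU_ENTRY_AREA_BASE + 0xec0
  0xec0 / 16             = the 16-byte IDT gate for vector 0xec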

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Reported-by: syzbot+8cdd16fd5a6c0565e227@syzkaller.appspotmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221110203504.1985010-6-seanjc@google.com
---
 arch/x86/mm/kasan_init_64.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index afc5e12..0302491 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -341,7 +341,7 @@ void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
 
 void __init kasan_init(void)
 {
-	unsigned long shadow_cea_begin, shadow_cea_end;
+	unsigned long shadow_cea_begin, shadow_cea_per_cpu_begin, shadow_cea_end;
 	int i;
 
 	memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
@@ -384,6 +384,7 @@ void __init kasan_init(void)
 	}
 
 	shadow_cea_begin = kasan_mem_to_shadow_align_down(CPU_ENTRY_AREA_BASE);
+	shadow_cea_per_cpu_begin = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_PER_CPU);
 	shadow_cea_end = kasan_mem_to_shadow_align_up(CPU_ENTRY_AREA_BASE +
 						      CPU_ENTRY_AREA_MAP_SIZE);
 
@@ -409,6 +410,15 @@ void __init kasan_init(void)
 		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
 		(void *)shadow_cea_begin);
 
+	/*
+	 * Populate the shadow for the shared portion of the CPU entry area.
+	 * Shadows for the per-CPU areas are mapped on-demand, as each CPU's
+	 * area is randomly placed somewhere in the 512GiB range and mapping
+	 * the entire 512GiB range is prohibitively expensive.
+	 */
+	kasan_populate_shadow(shadow_cea_begin,
+			      shadow_cea_per_cpu_begin, 0);
+
 	kasan_populate_early_shadow((void *)shadow_cea_end,
 			kasan_mem_to_shadow((void *)__START_KERNEL_map));
 

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip: x86/mm] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area
  2022-11-10 20:35 ` [PATCH v2 2/5] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area Sean Christopherson
  2022-11-14 14:10   ` Andrey Ryabinin
  2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
@ 2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot2 for Sean Christopherson @ 2022-12-17 18:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: syzbot+ffb4f000dc2872c93f62, Andrey Ryabinin,
	Sean Christopherson, Peter Zijlstra (Intel),
	x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     97650148a15e0b30099d6175ffe278b9f55ec66a
Gitweb:        https://git.kernel.org/tip/97650148a15e0b30099d6175ffe278b9f55ec66a
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:01 
Committer:     Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Thu, 15 Dec 2022 10:37:28 -08:00

x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area

Populate a KASAN shadow for the entire possible per-CPU range of the CPU
entry area instead of requiring that each individual chunk map a shadow.
Mapping shadows individually is error prone, e.g. the per-CPU GDT mapping
was left behind, which can lead to not-present page faults during KASAN
validation if the kernel performs a software lookup into the GDT.  The DS
buffer is also likely affected.
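
For orientation, the per-CPU chunk that now gets its shadow populated in one
shot roughly corresponds to struct cpu_entry_area (paraphrased from
arch/x86/include/asm/cpu_entry_area.h; exact members and guard pages are
config-dependent):

  struct cpu_entry_area {
          char gdt[PAGE_SIZE];                  /* the GDT mapping noted above */
          struct entry_stack_page entry_stack_page;
          struct tss_struct tss;
          struct cea_exception_stacks estacks;  /* x86-64 IST stacks + guards */
          struct debug_store cpu_debug_store;   /* Intel DS area */
          struct debug_store_buffers cpu_debug_buffers;  /* the "DS buffer" */
  };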

The motivation for mapping the per-CPU areas on-demand was to avoid
mapping the entire 512GiB range that's reserved for the CPU entry area;
shaving a few bytes by not creating shadows for potentially unused memory
was not a goal.

The bug is most easily reproduced by doing a sigreturn with a garbage
CS in the sigcontext, e.g.

  int main(void)
  {
    struct sigcontext regs;

    syscall(__NR_mmap, 0x1ffff000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);
    syscall(__NR_mmap, 0x20000000ul, 0x1000000ul, 7ul, 0x32ul, -1, 0ul);
    syscall(__NR_mmap, 0x21000000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);

    memset(&regs, 0, sizeof(regs));
    regs.cs = 0x1d0;
    syscall(__NR_rt_sigreturn);
    return 0;
  }

to coerce the kernel into doing a GDT lookup to compute CS.base when
reading the instruction bytes on the subsequent #GP to determine whether
or not the #GP is something the kernel should handle, e.g. to fixup UMIP
violations or to emulate CLI/STI for IOPL=3 applications.

  BUG: unable to handle page fault for address: fffffbc8379ace00
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 16c03a067 P4D 16c03a067 PUD 15b990067 PMD 15b98f067 PTE 0
  Oops: 0000 [#1] PREEMPT SMP KASAN
  CPU: 3 PID: 851 Comm: r2 Not tainted 6.1.0-rc3-next-20221103+ #432
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
  RIP: 0010:kasan_check_range+0xdf/0x190
  Call Trace:
   <TASK>
   get_desc+0xb0/0x1d0
   insn_get_seg_base+0x104/0x270
   insn_fetch_from_user+0x66/0x80
   fixup_umip_exception+0xb1/0x530
   exc_general_protection+0x181/0x210
   asm_exc_general_protection+0x22/0x30
  RIP: 0003:0x0
  Code: Unable to access opcode bytes at 0xffffffffffffffd6.
  RSP: 0003:0000000000000000 EFLAGS: 00000202
  RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000000001d0
  RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
  RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
  R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
   </TASK>

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Reported-by: syzbot+ffb4f000dc2872c93f62@syzkaller.appspotmail.com
Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-3-seanjc@google.com
---
 arch/x86/mm/cpu_entry_area.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index d831aae..7c855df 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -91,11 +91,6 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
 static void __init
 cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 {
-	phys_addr_t pa = per_cpu_ptr_to_phys(ptr);
-
-	kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
-					early_pfn_to_nid(PFN_DOWN(pa)));
-
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
 		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
@@ -195,6 +190,9 @@ static void __init setup_cpu_entry_area(unsigned int cpu)
 	pgprot_t tss_prot = PAGE_KERNEL;
 #endif
 
+	kasan_populate_shadow_for_vaddr(cea, CPU_ENTRY_AREA_SIZE,
+					early_cpu_to_node(cpu));
+
 	cea_set_pte(&cea->gdt, get_cpu_gdt_paddr(cpu), gdt_prot);
 
 	cea_map_percpu_pages(&cea->entry_stack_page,

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip: x86/mm] x86/mm: Recompute physical address for every page of per-CPU CEA mapping
  2022-11-10 20:35 ` [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping Sean Christopherson
  2022-11-14 14:09   ` Andrey Ryabinin
  2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
@ 2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot2 for Sean Christopherson @ 2022-12-17 18:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Sean Christopherson, Peter Zijlstra (Intel),
	Andrey Ryabinin, x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     80d72a8f76e8f3f0b5a70b8c7022578e17bde8e7
Gitweb:        https://git.kernel.org/tip/80d72a8f76e8f3f0b5a70b8c7022578e17bde8e7
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:00 
Committer:     Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Thu, 15 Dec 2022 10:37:28 -08:00

x86/mm: Recompute physical address for every page of per-CPU CEA mapping

Recompute the physical address for each per-CPU page in the CPU entry
area; a recent commit inadvertently modified cea_map_percpu_pages() such
that every PTE is mapped to the physical address of the first page.

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-2-seanjc@google.com
---
 arch/x86/mm/cpu_entry_area.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index dff9001..d831aae 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -97,7 +97,7 @@ cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 					early_pfn_to_nid(PFN_DOWN(pa)));
 
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
-		cea_set_pte(cea_vaddr, pa, prot);
+		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
 
 static void __init percpu_setup_debug_store(unsigned int cpu)

^ permalink raw reply related	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2022-12-17 18:56 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-10 20:34 [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes Sean Christopherson
2022-11-10 20:35 ` [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping Sean Christopherson
2022-11-14 14:09   ` Andrey Ryabinin
2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
2022-11-10 20:35 ` [PATCH v2 2/5] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area Sean Christopherson
2022-11-14 14:10   ` Andrey Ryabinin
2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
2022-11-10 20:35 ` [PATCH v2 3/5] x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names Sean Christopherson
2022-11-14 14:10   ` Andrey Ryabinin
2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
2022-11-10 20:35 ` [PATCH v2 4/5] x86/kasan: Add helpers to align shadow addresses up and down Sean Christopherson
2022-11-14 14:13   ` Andrey Ryabinin
2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
2022-11-10 20:35 ` [PATCH v2 5/5] x86/kasan: Populate shadow for shared chunk of the CPU entry area Sean Christopherson
2022-11-14 14:44   ` Andrey Ryabinin
2022-11-14 15:12     ` Peter Zijlstra
2022-11-14 17:53       ` Sean Christopherson
2022-11-14 21:46         ` Peter Zijlstra
2022-11-15 22:26   ` [tip: x86/mm] " tip-bot2 for Sean Christopherson
2022-12-17 18:55   ` tip-bot2 for Sean Christopherson
2022-11-14 11:57 ` [PATCH v2 0/5] x86/kasan: Bug fixes for recent CEA changes Peter Zijlstra
