From: Ingo Molnar <mingo@kernel.org>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Borislav Petkov <bp@alien8.de>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: [GIT PULL] x86 fixes
Date: Wed, 17 Jan 2018 16:41:34 +0100	[thread overview]
Message-ID: <20180117154134.bgocrlokyobeyfyu@gmail.com> (raw)

Linus,

Please pull the latest x86-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86-urgent-for-linus

   # HEAD: 45d55e7bac4028af93f5fa324e69958a0b868e96 x86/apic/vector: Fix off by one in error path

Misc fixes:

 - A rather involved set of hardware memory encryption (SME) fixes to support the
   early loading of microcode files via the initrd. These are larger than what we
   normally take at such a late -rc stage, but there are two mitigating factors:
   1) most of the changes are limited to the SME code itself, and 2) the ability
   to load microcode early has increased importance in the post-Meltdown/Spectre
   era. (An illustrative sketch of the in-place encryption approach follows this
   list.)

 - An IRQ vector allocator fix

 - An Intel RDT driver use-after-free fix

 - An APIC driver bug fix/revert to make certain older systems boot again

 - A pkeys ABI fix

 - TSC calibration fixes

 - A kdump fix
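
 To illustrate the in-place encryption scheme that the SME patches below extend
 to the initrd: the same physical range is mapped twice - once encrypted, once
 decrypted and write-protected - and each chunk is copied through an
 intermediate buffer. The following is a minimal, free-standing C sketch of that
 copy loop only (a hypothetical user-space model, not the kernel code; the real
 routine is __enc_copy in arch/x86/mm/mem_encrypt_boot.S, which additionally
 runs from a copied workarea, flushes caches and sets PAT PA5 to write-protect):

	/* Illustrative sketch only, not the kernel implementation. */
	#include <stddef.h>
	#include <string.h>

	#define CHUNK	(2UL * 1024 * 1024)	/* mirror the 2MB PMD-sized steps */

	/*
	 * enc_map and dec_map stand in for the encrypted and decrypted virtual
	 * mappings of the same physical range; buf is the intermediate buffer.
	 */
	static void encrypt_in_place(unsigned char *enc_map,
				     const unsigned char *dec_map,
				     unsigned char *buf, size_t len)
	{
		while (len) {
			size_t n = len < CHUNK ? len : CHUNK;

			memcpy(buf, dec_map, n); /* read plaintext via decrypted map  */
			memcpy(enc_map, buf, n); /* write back via encrypted map,     */
						 /* hardware encrypts on the write    */
			dec_map += n;
			enc_map += n;
			len -= n;
		}
	}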

  out-of-topic modifications in x86-urgent-for-linus:
  -----------------------------------------------------
  include/linux/crash_core.h         # 9f15b9120f56: kdump: Write the correct add
  kernel/crash_core.c                # 9f15b9120f56: kdump: Write the correct add
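
 For reference, the x86/apic/vector error-path change in this batch corrects a
 partial-initialization unwind: the failing entry is now cleaned up in place and
 only the fully-initialized entries are unwound (previously i + 1 entries were
 freed, double-handling the failing slot). A minimal generic sketch of that
 pattern (hypothetical configure() helper, not the kernel code):

	#include <stdlib.h>

	struct item { void *data; };

	/* Stand-in for the per-entry step that can fail after allocation. */
	static int configure(struct item *it) { (void)it; return 0; }

	static int alloc_items(struct item *items, int count)
	{
		int i;

		for (i = 0; i < count; i++) {
			items[i].data = malloc(64);	/* step 1: allocate     */
			if (!items[i].data)
				goto error;
			if (configure(&items[i])) {	/* step 2: may fail     */
				free(items[i].data);	/* undo step 1 in place */
				items[i].data = NULL;
				goto error;
			}
		}
		return 0;

	error:
		while (i--) {		/* unwind only fully set up slots */
			free(items[i].data);
			items[i].data = NULL;
		}
		return -1;
	}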

 Thanks,

	Ingo

------------------>
Andi Kleen (1):
      x86/idt: Mark IDT tables __initconst

Eric W. Biederman (1):
      x86/mm/pkeys: Fix fill_sig_info_pkey

Kirill A. Shutemov (1):
      kdump: Write the correct address of mem_section into vmcoreinfo

Len Brown (3):
      x86/tsc: Future-proof native_calibrate_tsc()
      x86/tsc: Fix erroneous TSC rate on Skylake Xeon
      x86/tsc: Print tsc_khz, when it differs from cpu_khz

Thomas Gleixner (2):
      x86/intel_rdt/cqm: Prevent use after free
      x86/apic/vector: Fix off by one in error path

Tom Lendacky (5):
      x86/mm: Clean up register saving in the __enc_copy() assembly code
      x86/mm: Use a struct to reduce parameters for SME PGD mapping
      x86/mm: Centralize PMD flags in sme_encrypt_kernel()
      x86/mm: Prepare sme_encrypt_kernel() for PAGE aligned encryption
      x86/mm: Encrypt the initrd earlier for BSP microcode update

Ville Syrjälä (1):
      Revert "x86/apic: Remove init_bsp_APIC()"


 arch/x86/include/asm/apic.h        |   1 +
 arch/x86/include/asm/mem_encrypt.h |   4 +-
 arch/x86/kernel/apic/apic.c        |  49 +++++
 arch/x86/kernel/apic/vector.c      |   7 +-
 arch/x86/kernel/cpu/intel_rdt.c    |   8 +-
 arch/x86/kernel/head64.c           |   4 +-
 arch/x86/kernel/idt.c              |  12 +-
 arch/x86/kernel/irqinit.c          |   3 +
 arch/x86/kernel/setup.c            |  10 --
 arch/x86/kernel/tsc.c              |   9 +-
 arch/x86/mm/fault.c                |   7 +-
 arch/x86/mm/mem_encrypt.c          | 356 +++++++++++++++++++++++++++----------
 arch/x86/mm/mem_encrypt_boot.S     |  80 +++++----
 include/linux/crash_core.h         |   2 +
 kernel/crash_core.c                |   2 +-
 15 files changed, 391 insertions(+), 163 deletions(-)

diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
index a9e57f08bfa6..98722773391d 100644
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -136,6 +136,7 @@ extern void disconnect_bsp_APIC(int virt_wire_setup);
 extern void disable_local_APIC(void);
 extern void lapic_shutdown(void);
 extern void sync_Arb_IDs(void);
+extern void init_bsp_APIC(void);
 extern void apic_intr_mode_init(void);
 extern void setup_local_APIC(void);
 extern void init_apic_mappings(void);
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index c9459a4c3c68..22c5f3e6f820 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -39,7 +39,7 @@ void __init sme_unmap_bootdata(char *real_mode_data);
 
 void __init sme_early_init(void);
 
-void __init sme_encrypt_kernel(void);
+void __init sme_encrypt_kernel(struct boot_params *bp);
 void __init sme_enable(struct boot_params *bp);
 
 int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size);
@@ -67,7 +67,7 @@ static inline void __init sme_unmap_bootdata(char *real_mode_data) { }
 
 static inline void __init sme_early_init(void) { }
 
-static inline void __init sme_encrypt_kernel(void) { }
+static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline bool sme_active(void) { return false; }
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index 880441f24146..25ddf02598d2 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1286,6 +1286,55 @@ static int __init apic_intr_mode_select(void)
 	return APIC_SYMMETRIC_IO;
 }
 
+/*
+ * An initial setup of the virtual wire mode.
+ */
+void __init init_bsp_APIC(void)
+{
+	unsigned int value;
+
+	/*
+	 * Don't do the setup now if we have a SMP BIOS as the
+	 * through-I/O-APIC virtual wire mode might be active.
+	 */
+	if (smp_found_config || !boot_cpu_has(X86_FEATURE_APIC))
+		return;
+
+	/*
+	 * Do not trust the local APIC being empty at bootup.
+	 */
+	clear_local_APIC();
+
+	/*
+	 * Enable APIC.
+	 */
+	value = apic_read(APIC_SPIV);
+	value &= ~APIC_VECTOR_MASK;
+	value |= APIC_SPIV_APIC_ENABLED;
+
+#ifdef CONFIG_X86_32
+	/* This bit is reserved on P4/Xeon and should be cleared */
+	if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
+	    (boot_cpu_data.x86 == 15))
+		value &= ~APIC_SPIV_FOCUS_DISABLED;
+	else
+#endif
+		value |= APIC_SPIV_FOCUS_DISABLED;
+	value |= SPURIOUS_APIC_VECTOR;
+	apic_write(APIC_SPIV, value);
+
+	/*
+	 * Set up the virtual wire mode.
+	 */
+	apic_write(APIC_LVT0, APIC_DM_EXTINT);
+	value = APIC_DM_NMI;
+	if (!lapic_is_integrated())		/* 82489DX */
+		value |= APIC_LVT_LEVEL_TRIGGER;
+	if (apic_extnmi == APIC_EXTNMI_NONE)
+		value |= APIC_LVT_MASKED;
+	apic_write(APIC_LVT1, value);
+}
+
 /* Init the interrupt delivery mode for the BSP */
 void __init apic_intr_mode_init(void)
 {
diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index f8b03bb8e725..3cc471beb50b 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -542,14 +542,17 @@ static int x86_vector_alloc_irqs(struct irq_domain *domain, unsigned int virq,
 
 		err = assign_irq_vector_policy(irqd, info);
 		trace_vector_setup(virq + i, false, err);
-		if (err)
+		if (err) {
+			irqd->chip_data = NULL;
+			free_apic_chip_data(apicd);
 			goto error;
+		}
 	}
 
 	return 0;
 
 error:
-	x86_vector_free_irqs(domain, virq, i + 1);
+	x86_vector_free_irqs(domain, virq, i);
 	return err;
 }
 
diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 88dcf8479013..99442370de40 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -525,10 +525,6 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
 		 */
 		if (static_branch_unlikely(&rdt_mon_enable_key))
 			rmdir_mondata_subdir_allrdtgrp(r, d->id);
-		kfree(d->ctrl_val);
-		kfree(d->rmid_busy_llc);
-		kfree(d->mbm_total);
-		kfree(d->mbm_local);
 		list_del(&d->list);
 		if (is_mbm_enabled())
 			cancel_delayed_work(&d->mbm_over);
@@ -545,6 +541,10 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
 			cancel_delayed_work(&d->cqm_limbo);
 		}
 
+		kfree(d->ctrl_val);
+		kfree(d->rmid_busy_llc);
+		kfree(d->mbm_total);
+		kfree(d->mbm_local);
 		kfree(d);
 		return;
 	}
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 6a5d757b9cfd..7ba5d819ebe3 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -157,8 +157,8 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	p = fixup_pointer(&phys_base, physaddr);
 	*p += load_delta - sme_get_me_mask();
 
-	/* Encrypt the kernel (if SME is active) */
-	sme_encrypt_kernel();
+	/* Encrypt the kernel and related (if SME is active) */
+	sme_encrypt_kernel(bp);
 
 	/*
 	 * Return the SME encryption mask (if SME is active) to be used as a
diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
index d985cef3984f..56d99be3706a 100644
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -56,7 +56,7 @@ struct idt_data {
  * Early traps running on the DEFAULT_STACK because the other interrupt
  * stacks work only after cpu_init().
  */
-static const __initdata struct idt_data early_idts[] = {
+static const __initconst struct idt_data early_idts[] = {
 	INTG(X86_TRAP_DB,		debug),
 	SYSG(X86_TRAP_BP,		int3),
 #ifdef CONFIG_X86_32
@@ -70,7 +70,7 @@ static const __initdata struct idt_data early_idts[] = {
  * the traps which use them are reinitialized with IST after cpu_init() has
  * set up TSS.
  */
-static const __initdata struct idt_data def_idts[] = {
+static const __initconst struct idt_data def_idts[] = {
 	INTG(X86_TRAP_DE,		divide_error),
 	INTG(X86_TRAP_NMI,		nmi),
 	INTG(X86_TRAP_BR,		bounds),
@@ -108,7 +108,7 @@ static const __initdata struct idt_data def_idts[] = {
 /*
  * The APIC and SMP idt entries
  */
-static const __initdata struct idt_data apic_idts[] = {
+static const __initconst struct idt_data apic_idts[] = {
 #ifdef CONFIG_SMP
 	INTG(RESCHEDULE_VECTOR,		reschedule_interrupt),
 	INTG(CALL_FUNCTION_VECTOR,	call_function_interrupt),
@@ -150,7 +150,7 @@ static const __initdata struct idt_data apic_idts[] = {
  * Early traps running on the DEFAULT_STACK because the other interrupt
  * stacks work only after cpu_init().
  */
-static const __initdata struct idt_data early_pf_idts[] = {
+static const __initconst struct idt_data early_pf_idts[] = {
 	INTG(X86_TRAP_PF,		page_fault),
 };
 
@@ -158,7 +158,7 @@ static const __initdata struct idt_data early_pf_idts[] = {
  * Override for the debug_idt. Same as the default, but with interrupt
  * stack set to DEFAULT_STACK (0). Required for NMI trap handling.
  */
-static const __initdata struct idt_data dbg_idts[] = {
+static const __initconst struct idt_data dbg_idts[] = {
 	INTG(X86_TRAP_DB,	debug),
 	INTG(X86_TRAP_BP,	int3),
 };
@@ -180,7 +180,7 @@ gate_desc debug_idt_table[IDT_ENTRIES] __page_aligned_bss;
  * The exceptions which use Interrupt stacks. They are setup after
  * cpu_init() when the TSS has been initialized.
  */
-static const __initdata struct idt_data ist_idts[] = {
+static const __initconst struct idt_data ist_idts[] = {
 	ISTG(X86_TRAP_DB,	debug,		DEBUG_STACK),
 	ISTG(X86_TRAP_NMI,	nmi,		NMI_STACK),
 	SISTG(X86_TRAP_BP,	int3,		DEBUG_STACK),
diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c
index 8da3e909e967..a539410c4ea9 100644
--- a/arch/x86/kernel/irqinit.c
+++ b/arch/x86/kernel/irqinit.c
@@ -61,6 +61,9 @@ void __init init_ISA_irqs(void)
 	struct irq_chip *chip = legacy_pic->chip;
 	int i;
 
+#if defined(CONFIG_X86_64) || defined(CONFIG_X86_LOCAL_APIC)
+	init_bsp_APIC();
+#endif
 	legacy_pic->init(0);
 
 	for (i = 0; i < nr_legacy_irqs(); i++)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 145810b0edf6..68d7ab81c62f 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -364,16 +364,6 @@ static void __init reserve_initrd(void)
 	    !ramdisk_image || !ramdisk_size)
 		return;		/* No initrd provided by bootloader */
 
-	/*
-	 * If SME is active, this memory will be marked encrypted by the
-	 * kernel when it is accessed (including relocation). However, the
-	 * ramdisk image was loaded decrypted by the bootloader, so make
-	 * sure that it is encrypted before accessing it. For SEV the
-	 * ramdisk will already be encrypted, so only do this for SME.
-	 */
-	if (sme_active())
-		sme_early_encrypt(ramdisk_image, ramdisk_end - ramdisk_image);
-
 	initrd_start = 0;
 
 	mapped_size = memblock_mem_size(max_pfn_mapped);
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 8ea117f8142e..e169e85db434 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -602,7 +602,6 @@ unsigned long native_calibrate_tsc(void)
 		case INTEL_FAM6_KABYLAKE_DESKTOP:
 			crystal_khz = 24000;	/* 24.0 MHz */
 			break;
-		case INTEL_FAM6_SKYLAKE_X:
 		case INTEL_FAM6_ATOM_DENVERTON:
 			crystal_khz = 25000;	/* 25.0 MHz */
 			break;
@@ -612,6 +611,8 @@ unsigned long native_calibrate_tsc(void)
 		}
 	}
 
+	if (crystal_khz == 0)
+		return 0;
 	/*
 	 * TSC frequency determined by CPUID is a "hardware reported"
 	 * frequency and is the most accurate one so far we have. This
@@ -1315,6 +1316,12 @@ void __init tsc_init(void)
 		(unsigned long)cpu_khz / 1000,
 		(unsigned long)cpu_khz % 1000);
 
+	if (cpu_khz != tsc_khz) {
+		pr_info("Detected %lu.%03lu MHz TSC",
+			(unsigned long)tsc_khz / 1000,
+			(unsigned long)tsc_khz % 1000);
+	}
+
 	/* Sanitize TSC ADJUST before cyc2ns gets initialized */
 	tsc_store_and_check_tsc_adjust(true);
 
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 06fe3d51d385..b3e40773dce0 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -172,14 +172,15 @@ is_prefetch(struct pt_regs *regs, unsigned long error_code, unsigned long addr)
  * 6. T1   : reaches here, sees vma_pkey(vma)=5, when we really
  *	     faulted on a pte with its pkey=4.
  */
-static void fill_sig_info_pkey(int si_code, siginfo_t *info, u32 *pkey)
+static void fill_sig_info_pkey(int si_signo, int si_code, siginfo_t *info,
+		u32 *pkey)
 {
 	/* This is effectively an #ifdef */
 	if (!boot_cpu_has(X86_FEATURE_OSPKE))
 		return;
 
 	/* Fault not from Protection Keys: nothing to do */
-	if (si_code != SEGV_PKUERR)
+	if ((si_code != SEGV_PKUERR) || (si_signo != SIGSEGV))
 		return;
 	/*
 	 * force_sig_info_fault() is called from a number of
@@ -218,7 +219,7 @@ force_sig_info_fault(int si_signo, int si_code, unsigned long address,
 		lsb = PAGE_SHIFT;
 	info.si_addr_lsb = lsb;
 
-	fill_sig_info_pkey(si_code, &info, pkey);
+	fill_sig_info_pkey(si_signo, si_code, &info, pkey);
 
 	force_sig_info(si_signo, &info, tsk);
 }
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 391b13402e40..3ef362f598e3 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -464,37 +464,62 @@ void swiotlb_set_mem_attributes(void *vaddr, unsigned long size)
 	set_memory_decrypted((unsigned long)vaddr, size >> PAGE_SHIFT);
 }
 
-static void __init sme_clear_pgd(pgd_t *pgd_base, unsigned long start,
-				 unsigned long end)
+struct sme_populate_pgd_data {
+	void	*pgtable_area;
+	pgd_t	*pgd;
+
+	pmdval_t pmd_flags;
+	pteval_t pte_flags;
+	unsigned long paddr;
+
+	unsigned long vaddr;
+	unsigned long vaddr_end;
+};
+
+static void __init sme_clear_pgd(struct sme_populate_pgd_data *ppd)
 {
 	unsigned long pgd_start, pgd_end, pgd_size;
 	pgd_t *pgd_p;
 
-	pgd_start = start & PGDIR_MASK;
-	pgd_end = end & PGDIR_MASK;
+	pgd_start = ppd->vaddr & PGDIR_MASK;
+	pgd_end = ppd->vaddr_end & PGDIR_MASK;
 
-	pgd_size = (((pgd_end - pgd_start) / PGDIR_SIZE) + 1);
-	pgd_size *= sizeof(pgd_t);
+	pgd_size = (((pgd_end - pgd_start) / PGDIR_SIZE) + 1) * sizeof(pgd_t);
 
-	pgd_p = pgd_base + pgd_index(start);
+	pgd_p = ppd->pgd + pgd_index(ppd->vaddr);
 
 	memset(pgd_p, 0, pgd_size);
 }
 
-#define PGD_FLAGS	_KERNPG_TABLE_NOENC
-#define P4D_FLAGS	_KERNPG_TABLE_NOENC
-#define PUD_FLAGS	_KERNPG_TABLE_NOENC
-#define PMD_FLAGS	(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL)
+#define PGD_FLAGS		_KERNPG_TABLE_NOENC
+#define P4D_FLAGS		_KERNPG_TABLE_NOENC
+#define PUD_FLAGS		_KERNPG_TABLE_NOENC
+#define PMD_FLAGS		_KERNPG_TABLE_NOENC
+
+#define PMD_FLAGS_LARGE		(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL)
+
+#define PMD_FLAGS_DEC		PMD_FLAGS_LARGE
+#define PMD_FLAGS_DEC_WP	((PMD_FLAGS_DEC & ~_PAGE_CACHE_MASK) | \
+				 (_PAGE_PAT | _PAGE_PWT))
+
+#define PMD_FLAGS_ENC		(PMD_FLAGS_LARGE | _PAGE_ENC)
+
+#define PTE_FLAGS		(__PAGE_KERNEL_EXEC & ~_PAGE_GLOBAL)
+
+#define PTE_FLAGS_DEC		PTE_FLAGS
+#define PTE_FLAGS_DEC_WP	((PTE_FLAGS_DEC & ~_PAGE_CACHE_MASK) | \
+				 (_PAGE_PAT | _PAGE_PWT))
+
+#define PTE_FLAGS_ENC		(PTE_FLAGS | _PAGE_ENC)
 
-static void __init *sme_populate_pgd(pgd_t *pgd_base, void *pgtable_area,
-				     unsigned long vaddr, pmdval_t pmd_val)
+static pmd_t __init *sme_prepare_pgd(struct sme_populate_pgd_data *ppd)
 {
 	pgd_t *pgd_p;
 	p4d_t *p4d_p;
 	pud_t *pud_p;
 	pmd_t *pmd_p;
 
-	pgd_p = pgd_base + pgd_index(vaddr);
+	pgd_p = ppd->pgd + pgd_index(ppd->vaddr);
 	if (native_pgd_val(*pgd_p)) {
 		if (IS_ENABLED(CONFIG_X86_5LEVEL))
 			p4d_p = (p4d_t *)(native_pgd_val(*pgd_p) & ~PTE_FLAGS_MASK);
@@ -504,15 +529,15 @@ static void __init *sme_populate_pgd(pgd_t *pgd_base, void *pgtable_area,
 		pgd_t pgd;
 
 		if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
-			p4d_p = pgtable_area;
+			p4d_p = ppd->pgtable_area;
 			memset(p4d_p, 0, sizeof(*p4d_p) * PTRS_PER_P4D);
-			pgtable_area += sizeof(*p4d_p) * PTRS_PER_P4D;
+			ppd->pgtable_area += sizeof(*p4d_p) * PTRS_PER_P4D;
 
 			pgd = native_make_pgd((pgdval_t)p4d_p + PGD_FLAGS);
 		} else {
-			pud_p = pgtable_area;
+			pud_p = ppd->pgtable_area;
 			memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
-			pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
+			ppd->pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
 
 			pgd = native_make_pgd((pgdval_t)pud_p + PGD_FLAGS);
 		}
@@ -520,58 +545,160 @@ static void __init *sme_populate_pgd(pgd_t *pgd_base, void *pgtable_area,
 	}
 
 	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
-		p4d_p += p4d_index(vaddr);
+		p4d_p += p4d_index(ppd->vaddr);
 		if (native_p4d_val(*p4d_p)) {
 			pud_p = (pud_t *)(native_p4d_val(*p4d_p) & ~PTE_FLAGS_MASK);
 		} else {
 			p4d_t p4d;
 
-			pud_p = pgtable_area;
+			pud_p = ppd->pgtable_area;
 			memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
-			pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
+			ppd->pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
 
 			p4d = native_make_p4d((pudval_t)pud_p + P4D_FLAGS);
 			native_set_p4d(p4d_p, p4d);
 		}
 	}
 
-	pud_p += pud_index(vaddr);
+	pud_p += pud_index(ppd->vaddr);
 	if (native_pud_val(*pud_p)) {
 		if (native_pud_val(*pud_p) & _PAGE_PSE)
-			goto out;
+			return NULL;
 
 		pmd_p = (pmd_t *)(native_pud_val(*pud_p) & ~PTE_FLAGS_MASK);
 	} else {
 		pud_t pud;
 
-		pmd_p = pgtable_area;
+		pmd_p = ppd->pgtable_area;
 		memset(pmd_p, 0, sizeof(*pmd_p) * PTRS_PER_PMD);
-		pgtable_area += sizeof(*pmd_p) * PTRS_PER_PMD;
+		ppd->pgtable_area += sizeof(*pmd_p) * PTRS_PER_PMD;
 
 		pud = native_make_pud((pmdval_t)pmd_p + PUD_FLAGS);
 		native_set_pud(pud_p, pud);
 	}
 
-	pmd_p += pmd_index(vaddr);
+	return pmd_p;
+}
+
+static void __init sme_populate_pgd_large(struct sme_populate_pgd_data *ppd)
+{
+	pmd_t *pmd_p;
+
+	pmd_p = sme_prepare_pgd(ppd);
+	if (!pmd_p)
+		return;
+
+	pmd_p += pmd_index(ppd->vaddr);
 	if (!native_pmd_val(*pmd_p) || !(native_pmd_val(*pmd_p) & _PAGE_PSE))
-		native_set_pmd(pmd_p, native_make_pmd(pmd_val));
+		native_set_pmd(pmd_p, native_make_pmd(ppd->paddr | ppd->pmd_flags));
+}
 
-out:
-	return pgtable_area;
+static void __init sme_populate_pgd(struct sme_populate_pgd_data *ppd)
+{
+	pmd_t *pmd_p;
+	pte_t *pte_p;
+
+	pmd_p = sme_prepare_pgd(ppd);
+	if (!pmd_p)
+		return;
+
+	pmd_p += pmd_index(ppd->vaddr);
+	if (native_pmd_val(*pmd_p)) {
+		if (native_pmd_val(*pmd_p) & _PAGE_PSE)
+			return;
+
+		pte_p = (pte_t *)(native_pmd_val(*pmd_p) & ~PTE_FLAGS_MASK);
+	} else {
+		pmd_t pmd;
+
+		pte_p = ppd->pgtable_area;
+		memset(pte_p, 0, sizeof(*pte_p) * PTRS_PER_PTE);
+		ppd->pgtable_area += sizeof(*pte_p) * PTRS_PER_PTE;
+
+		pmd = native_make_pmd((pteval_t)pte_p + PMD_FLAGS);
+		native_set_pmd(pmd_p, pmd);
+	}
+
+	pte_p += pte_index(ppd->vaddr);
+	if (!native_pte_val(*pte_p))
+		native_set_pte(pte_p, native_make_pte(ppd->paddr | ppd->pte_flags));
+}
+
+static void __init __sme_map_range_pmd(struct sme_populate_pgd_data *ppd)
+{
+	while (ppd->vaddr < ppd->vaddr_end) {
+		sme_populate_pgd_large(ppd);
+
+		ppd->vaddr += PMD_PAGE_SIZE;
+		ppd->paddr += PMD_PAGE_SIZE;
+	}
+}
+
+static void __init __sme_map_range_pte(struct sme_populate_pgd_data *ppd)
+{
+	while (ppd->vaddr < ppd->vaddr_end) {
+		sme_populate_pgd(ppd);
+
+		ppd->vaddr += PAGE_SIZE;
+		ppd->paddr += PAGE_SIZE;
+	}
+}
+
+static void __init __sme_map_range(struct sme_populate_pgd_data *ppd,
+				   pmdval_t pmd_flags, pteval_t pte_flags)
+{
+	unsigned long vaddr_end;
+
+	ppd->pmd_flags = pmd_flags;
+	ppd->pte_flags = pte_flags;
+
+	/* Save original end value since we modify the struct value */
+	vaddr_end = ppd->vaddr_end;
+
+	/* If start is not 2MB aligned, create PTE entries */
+	ppd->vaddr_end = ALIGN(ppd->vaddr, PMD_PAGE_SIZE);
+	__sme_map_range_pte(ppd);
+
+	/* Create PMD entries */
+	ppd->vaddr_end = vaddr_end & PMD_PAGE_MASK;
+	__sme_map_range_pmd(ppd);
+
+	/* If end is not 2MB aligned, create PTE entries */
+	ppd->vaddr_end = vaddr_end;
+	__sme_map_range_pte(ppd);
+}
+
+static void __init sme_map_range_encrypted(struct sme_populate_pgd_data *ppd)
+{
+	__sme_map_range(ppd, PMD_FLAGS_ENC, PTE_FLAGS_ENC);
+}
+
+static void __init sme_map_range_decrypted(struct sme_populate_pgd_data *ppd)
+{
+	__sme_map_range(ppd, PMD_FLAGS_DEC, PTE_FLAGS_DEC);
+}
+
+static void __init sme_map_range_decrypted_wp(struct sme_populate_pgd_data *ppd)
+{
+	__sme_map_range(ppd, PMD_FLAGS_DEC_WP, PTE_FLAGS_DEC_WP);
 }
 
 static unsigned long __init sme_pgtable_calc(unsigned long len)
 {
-	unsigned long p4d_size, pud_size, pmd_size;
+	unsigned long p4d_size, pud_size, pmd_size, pte_size;
 	unsigned long total;
 
 	/*
 	 * Perform a relatively simplistic calculation of the pagetable
-	 * entries that are needed. That mappings will be covered by 2MB
-	 * PMD entries so we can conservatively calculate the required
+	 * entries that are needed. Those mappings will be covered mostly
+	 * by 2MB PMD entries so we can conservatively calculate the required
 	 * number of P4D, PUD and PMD structures needed to perform the
-	 * mappings. Incrementing the count for each covers the case where
-	 * the addresses cross entries.
+	 * mappings.  For mappings that are not 2MB aligned, PTE mappings
+	 * would be needed for the start and end portion of the address range
+	 * that fall outside of the 2MB alignment.  This results in, at most,
+	 * two extra pages to hold PTE entries for each range that is mapped.
+	 * Incrementing the count for each covers the case where the addresses
+	 * cross entries.
 	 */
 	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
 		p4d_size = (ALIGN(len, PGDIR_SIZE) / PGDIR_SIZE) + 1;
@@ -585,8 +712,9 @@ static unsigned long __init sme_pgtable_calc(unsigned long len)
 	}
 	pmd_size = (ALIGN(len, PUD_SIZE) / PUD_SIZE) + 1;
 	pmd_size *= sizeof(pmd_t) * PTRS_PER_PMD;
+	pte_size = 2 * sizeof(pte_t) * PTRS_PER_PTE;
 
-	total = p4d_size + pud_size + pmd_size;
+	total = p4d_size + pud_size + pmd_size + pte_size;
 
 	/*
 	 * Now calculate the added pagetable structures needed to populate
@@ -610,29 +738,29 @@ static unsigned long __init sme_pgtable_calc(unsigned long len)
 	return total;
 }
 
-void __init sme_encrypt_kernel(void)
+void __init sme_encrypt_kernel(struct boot_params *bp)
 {
 	unsigned long workarea_start, workarea_end, workarea_len;
 	unsigned long execute_start, execute_end, execute_len;
 	unsigned long kernel_start, kernel_end, kernel_len;
+	unsigned long initrd_start, initrd_end, initrd_len;
+	struct sme_populate_pgd_data ppd;
 	unsigned long pgtable_area_len;
-	unsigned long paddr, pmd_flags;
 	unsigned long decrypted_base;
-	void *pgtable_area;
-	pgd_t *pgd;
 
 	if (!sme_active())
 		return;
 
 	/*
-	 * Prepare for encrypting the kernel by building new pagetables with
-	 * the necessary attributes needed to encrypt the kernel in place.
+	 * Prepare for encrypting the kernel and initrd by building new
+	 * pagetables with the necessary attributes needed to encrypt the
+	 * kernel in place.
 	 *
 	 *   One range of virtual addresses will map the memory occupied
-	 *   by the kernel as encrypted.
+	 *   by the kernel and initrd as encrypted.
 	 *
 	 *   Another range of virtual addresses will map the memory occupied
-	 *   by the kernel as decrypted and write-protected.
+	 *   by the kernel and initrd as decrypted and write-protected.
 	 *
 	 *     The use of write-protect attribute will prevent any of the
 	 *     memory from being cached.
@@ -643,6 +771,20 @@ void __init sme_encrypt_kernel(void)
 	kernel_end = ALIGN(__pa_symbol(_end), PMD_PAGE_SIZE);
 	kernel_len = kernel_end - kernel_start;
 
+	initrd_start = 0;
+	initrd_end = 0;
+	initrd_len = 0;
+#ifdef CONFIG_BLK_DEV_INITRD
+	initrd_len = (unsigned long)bp->hdr.ramdisk_size |
+		     ((unsigned long)bp->ext_ramdisk_size << 32);
+	if (initrd_len) {
+		initrd_start = (unsigned long)bp->hdr.ramdisk_image |
+			       ((unsigned long)bp->ext_ramdisk_image << 32);
+		initrd_end = PAGE_ALIGN(initrd_start + initrd_len);
+		initrd_len = initrd_end - initrd_start;
+	}
+#endif
+
 	/* Set the encryption workarea to be immediately after the kernel */
 	workarea_start = kernel_end;
 
@@ -665,16 +807,21 @@ void __init sme_encrypt_kernel(void)
 	 */
 	pgtable_area_len = sizeof(pgd_t) * PTRS_PER_PGD;
 	pgtable_area_len += sme_pgtable_calc(execute_end - kernel_start) * 2;
+	if (initrd_len)
+		pgtable_area_len += sme_pgtable_calc(initrd_len) * 2;
 
 	/* PUDs and PMDs needed in the current pagetables for the workarea */
 	pgtable_area_len += sme_pgtable_calc(execute_len + pgtable_area_len);
 
 	/*
 	 * The total workarea includes the executable encryption area and
-	 * the pagetable area.
+	 * the pagetable area. The start of the workarea is already 2MB
+	 * aligned, align the end of the workarea on a 2MB boundary so that
+	 * we don't try to create/allocate PTE entries from the workarea
+	 * before it is mapped.
 	 */
 	workarea_len = execute_len + pgtable_area_len;
-	workarea_end = workarea_start + workarea_len;
+	workarea_end = ALIGN(workarea_start + workarea_len, PMD_PAGE_SIZE);
 
 	/*
 	 * Set the address to the start of where newly created pagetable
@@ -683,45 +830,30 @@ void __init sme_encrypt_kernel(void)
 	 * pagetables and when the new encrypted and decrypted kernel
 	 * mappings are populated.
 	 */
-	pgtable_area = (void *)execute_end;
+	ppd.pgtable_area = (void *)execute_end;
 
 	/*
 	 * Make sure the current pagetable structure has entries for
 	 * addressing the workarea.
 	 */
-	pgd = (pgd_t *)native_read_cr3_pa();
-	paddr = workarea_start;
-	while (paddr < workarea_end) {
-		pgtable_area = sme_populate_pgd(pgd, pgtable_area,
-						paddr,
-						paddr + PMD_FLAGS);
-
-		paddr += PMD_PAGE_SIZE;
-	}
+	ppd.pgd = (pgd_t *)native_read_cr3_pa();
+	ppd.paddr = workarea_start;
+	ppd.vaddr = workarea_start;
+	ppd.vaddr_end = workarea_end;
+	sme_map_range_decrypted(&ppd);
 
 	/* Flush the TLB - no globals so cr3 is enough */
 	native_write_cr3(__native_read_cr3());
 
 	/*
 	 * A new pagetable structure is being built to allow for the kernel
-	 * to be encrypted. It starts with an empty PGD that will then be
-	 * populated with new PUDs and PMDs as the encrypted and decrypted
-	 * kernel mappings are created.
+	 * and initrd to be encrypted. It starts with an empty PGD that will
+	 * then be populated with new PUDs and PMDs as the encrypted and
+	 * decrypted kernel mappings are created.
 	 */
-	pgd = pgtable_area;
-	memset(pgd, 0, sizeof(*pgd) * PTRS_PER_PGD);
-	pgtable_area += sizeof(*pgd) * PTRS_PER_PGD;
-
-	/* Add encrypted kernel (identity) mappings */
-	pmd_flags = PMD_FLAGS | _PAGE_ENC;
-	paddr = kernel_start;
-	while (paddr < kernel_end) {
-		pgtable_area = sme_populate_pgd(pgd, pgtable_area,
-						paddr,
-						paddr + pmd_flags);
-
-		paddr += PMD_PAGE_SIZE;
-	}
+	ppd.pgd = ppd.pgtable_area;
+	memset(ppd.pgd, 0, sizeof(pgd_t) * PTRS_PER_PGD);
+	ppd.pgtable_area += sizeof(pgd_t) * PTRS_PER_PGD;
 
 	/*
 	 * A different PGD index/entry must be used to get different
@@ -730,47 +862,79 @@ void __init sme_encrypt_kernel(void)
 	 * the base of the mapping.
 	 */
 	decrypted_base = (pgd_index(workarea_end) + 1) & (PTRS_PER_PGD - 1);
+	if (initrd_len) {
+		unsigned long check_base;
+
+		check_base = (pgd_index(initrd_end) + 1) & (PTRS_PER_PGD - 1);
+		decrypted_base = max(decrypted_base, check_base);
+	}
 	decrypted_base <<= PGDIR_SHIFT;
 
+	/* Add encrypted kernel (identity) mappings */
+	ppd.paddr = kernel_start;
+	ppd.vaddr = kernel_start;
+	ppd.vaddr_end = kernel_end;
+	sme_map_range_encrypted(&ppd);
+
 	/* Add decrypted, write-protected kernel (non-identity) mappings */
-	pmd_flags = (PMD_FLAGS & ~_PAGE_CACHE_MASK) | (_PAGE_PAT | _PAGE_PWT);
-	paddr = kernel_start;
-	while (paddr < kernel_end) {
-		pgtable_area = sme_populate_pgd(pgd, pgtable_area,
-						paddr + decrypted_base,
-						paddr + pmd_flags);
-
-		paddr += PMD_PAGE_SIZE;
+	ppd.paddr = kernel_start;
+	ppd.vaddr = kernel_start + decrypted_base;
+	ppd.vaddr_end = kernel_end + decrypted_base;
+	sme_map_range_decrypted_wp(&ppd);
+
+	if (initrd_len) {
+		/* Add encrypted initrd (identity) mappings */
+		ppd.paddr = initrd_start;
+		ppd.vaddr = initrd_start;
+		ppd.vaddr_end = initrd_end;
+		sme_map_range_encrypted(&ppd);
+		/*
+		 * Add decrypted, write-protected initrd (non-identity) mappings
+		 */
+		ppd.paddr = initrd_start;
+		ppd.vaddr = initrd_start + decrypted_base;
+		ppd.vaddr_end = initrd_end + decrypted_base;
+		sme_map_range_decrypted_wp(&ppd);
 	}
 
 	/* Add decrypted workarea mappings to both kernel mappings */
-	paddr = workarea_start;
-	while (paddr < workarea_end) {
-		pgtable_area = sme_populate_pgd(pgd, pgtable_area,
-						paddr,
-						paddr + PMD_FLAGS);
+	ppd.paddr = workarea_start;
+	ppd.vaddr = workarea_start;
+	ppd.vaddr_end = workarea_end;
+	sme_map_range_decrypted(&ppd);
 
-		pgtable_area = sme_populate_pgd(pgd, pgtable_area,
-						paddr + decrypted_base,
-						paddr + PMD_FLAGS);
-
-		paddr += PMD_PAGE_SIZE;
-	}
+	ppd.paddr = workarea_start;
+	ppd.vaddr = workarea_start + decrypted_base;
+	ppd.vaddr_end = workarea_end + decrypted_base;
+	sme_map_range_decrypted(&ppd);
 
 	/* Perform the encryption */
 	sme_encrypt_execute(kernel_start, kernel_start + decrypted_base,
-			    kernel_len, workarea_start, (unsigned long)pgd);
+			    kernel_len, workarea_start, (unsigned long)ppd.pgd);
+
+	if (initrd_len)
+		sme_encrypt_execute(initrd_start, initrd_start + decrypted_base,
+				    initrd_len, workarea_start,
+				    (unsigned long)ppd.pgd);
 
 	/*
 	 * At this point we are running encrypted.  Remove the mappings for
 	 * the decrypted areas - all that is needed for this is to remove
 	 * the PGD entry/entries.
 	 */
-	sme_clear_pgd(pgd, kernel_start + decrypted_base,
-		      kernel_end + decrypted_base);
+	ppd.vaddr = kernel_start + decrypted_base;
+	ppd.vaddr_end = kernel_end + decrypted_base;
+	sme_clear_pgd(&ppd);
+
+	if (initrd_len) {
+		ppd.vaddr = initrd_start + decrypted_base;
+		ppd.vaddr_end = initrd_end + decrypted_base;
+		sme_clear_pgd(&ppd);
+	}
 
-	sme_clear_pgd(pgd, workarea_start + decrypted_base,
-		      workarea_end + decrypted_base);
+	ppd.vaddr = workarea_start + decrypted_base;
+	ppd.vaddr_end = workarea_end + decrypted_base;
+	sme_clear_pgd(&ppd);
 
 	/* Flush the TLB - no globals so cr3 is enough */
 	native_write_cr3(__native_read_cr3());
diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S
index 730e6d541df1..01f682cf77a8 100644
--- a/arch/x86/mm/mem_encrypt_boot.S
+++ b/arch/x86/mm/mem_encrypt_boot.S
@@ -22,9 +22,9 @@ ENTRY(sme_encrypt_execute)
 
 	/*
 	 * Entry parameters:
-	 *   RDI - virtual address for the encrypted kernel mapping
-	 *   RSI - virtual address for the decrypted kernel mapping
-	 *   RDX - length of kernel
+	 *   RDI - virtual address for the encrypted mapping
+	 *   RSI - virtual address for the decrypted mapping
+	 *   RDX - length to encrypt
 	 *   RCX - virtual address of the encryption workarea, including:
 	 *     - stack page (PAGE_SIZE)
 	 *     - encryption routine page (PAGE_SIZE)
@@ -41,9 +41,9 @@ ENTRY(sme_encrypt_execute)
 	addq	$PAGE_SIZE, %rax	/* Workarea encryption routine */
 
 	push	%r12
-	movq	%rdi, %r10		/* Encrypted kernel */
-	movq	%rsi, %r11		/* Decrypted kernel */
-	movq	%rdx, %r12		/* Kernel length */
+	movq	%rdi, %r10		/* Encrypted area */
+	movq	%rsi, %r11		/* Decrypted area */
+	movq	%rdx, %r12		/* Area length */
 
 	/* Copy encryption routine into the workarea */
 	movq	%rax, %rdi				/* Workarea encryption routine */
@@ -52,10 +52,10 @@ ENTRY(sme_encrypt_execute)
 	rep	movsb
 
 	/* Setup registers for call */
-	movq	%r10, %rdi		/* Encrypted kernel */
-	movq	%r11, %rsi		/* Decrypted kernel */
+	movq	%r10, %rdi		/* Encrypted area */
+	movq	%r11, %rsi		/* Decrypted area */
 	movq	%r8, %rdx		/* Pagetables used for encryption */
-	movq	%r12, %rcx		/* Kernel length */
+	movq	%r12, %rcx		/* Area length */
 	movq	%rax, %r8		/* Workarea encryption routine */
 	addq	$PAGE_SIZE, %r8		/* Workarea intermediate copy buffer */
 
@@ -71,7 +71,7 @@ ENDPROC(sme_encrypt_execute)
 
 ENTRY(__enc_copy)
 /*
- * Routine used to encrypt kernel.
+ * Routine used to encrypt memory in place.
  *   This routine must be run outside of the kernel proper since
  *   the kernel will be encrypted during the process. So this
  *   routine is defined here and then copied to an area outside
@@ -79,19 +79,19 @@ ENTRY(__enc_copy)
  *   during execution.
  *
  *   On entry the registers must be:
- *     RDI - virtual address for the encrypted kernel mapping
- *     RSI - virtual address for the decrypted kernel mapping
+ *     RDI - virtual address for the encrypted mapping
+ *     RSI - virtual address for the decrypted mapping
  *     RDX - address of the pagetables to use for encryption
- *     RCX - length of kernel
+ *     RCX - length of area
  *      R8 - intermediate copy buffer
  *
  *     RAX - points to this routine
  *
- * The kernel will be encrypted by copying from the non-encrypted
- * kernel space to an intermediate buffer and then copying from the
- * intermediate buffer back to the encrypted kernel space. The physical
- * addresses of the two kernel space mappings are the same which
- * results in the kernel being encrypted "in place".
+ * The area will be encrypted by copying from the non-encrypted
+ * memory space to an intermediate buffer and then copying from the
+ * intermediate buffer back to the encrypted memory space. The physical
+ * addresses of the two mappings are the same which results in the area
+ * being encrypted "in place".
  */
 	/* Enable the new page tables */
 	mov	%rdx, %cr3
@@ -103,47 +103,55 @@ ENTRY(__enc_copy)
 	orq	$X86_CR4_PGE, %rdx
 	mov	%rdx, %cr4
 
+	push	%r15
+	push	%r12
+
+	movq	%rcx, %r9		/* Save area length */
+	movq	%rdi, %r10		/* Save encrypted area address */
+	movq	%rsi, %r11		/* Save decrypted area address */
+
 	/* Set the PAT register PA5 entry to write-protect */
-	push	%rcx
 	movl	$MSR_IA32_CR_PAT, %ecx
 	rdmsr
-	push	%rdx			/* Save original PAT value */
+	mov	%rdx, %r15		/* Save original PAT value */
 	andl	$0xffff00ff, %edx	/* Clear PA5 */
 	orl	$0x00000500, %edx	/* Set PA5 to WP */
 	wrmsr
-	pop	%rdx			/* RDX contains original PAT value */
-	pop	%rcx
-
-	movq	%rcx, %r9		/* Save kernel length */
-	movq	%rdi, %r10		/* Save encrypted kernel address */
-	movq	%rsi, %r11		/* Save decrypted kernel address */
 
 	wbinvd				/* Invalidate any cache entries */
 
-	/* Copy/encrypt 2MB at a time */
+	/* Copy/encrypt up to 2MB at a time */
+	movq	$PMD_PAGE_SIZE, %r12
 1:
-	movq	%r11, %rsi		/* Source - decrypted kernel */
+	cmpq	%r12, %r9
+	jnb	2f
+	movq	%r9, %r12
+
+2:
+	movq	%r11, %rsi		/* Source - decrypted area */
 	movq	%r8, %rdi		/* Dest   - intermediate copy buffer */
-	movq	$PMD_PAGE_SIZE, %rcx	/* 2MB length */
+	movq	%r12, %rcx
 	rep	movsb
 
 	movq	%r8, %rsi		/* Source - intermediate copy buffer */
-	movq	%r10, %rdi		/* Dest   - encrypted kernel */
-	movq	$PMD_PAGE_SIZE, %rcx	/* 2MB length */
+	movq	%r10, %rdi		/* Dest   - encrypted area */
+	movq	%r12, %rcx
 	rep	movsb
 
-	addq	$PMD_PAGE_SIZE, %r11
-	addq	$PMD_PAGE_SIZE, %r10
-	subq	$PMD_PAGE_SIZE, %r9	/* Kernel length decrement */
+	addq	%r12, %r11
+	addq	%r12, %r10
+	subq	%r12, %r9		/* Kernel length decrement */
 	jnz	1b			/* Kernel length not zero? */
 
 	/* Restore PAT register */
-	push	%rdx			/* Save original PAT value */
 	movl	$MSR_IA32_CR_PAT, %ecx
 	rdmsr
-	pop	%rdx			/* Restore original PAT value */
+	mov	%r15, %rdx		/* Restore original PAT value */
 	wrmsr
 
+	pop	%r12
+	pop	%r15
+
 	ret
 .L__enc_copy_end:
 ENDPROC(__enc_copy)
diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index 06097ef30449..b511f6d24b42 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -42,6 +42,8 @@ phys_addr_t paddr_vmcoreinfo_note(void);
 	vmcoreinfo_append_str("PAGESIZE=%ld\n", value)
 #define VMCOREINFO_SYMBOL(name) \
 	vmcoreinfo_append_str("SYMBOL(%s)=%lx\n", #name, (unsigned long)&name)
+#define VMCOREINFO_SYMBOL_ARRAY(name) \
+	vmcoreinfo_append_str("SYMBOL(%s)=%lx\n", #name, (unsigned long)name)
 #define VMCOREINFO_SIZE(name) \
 	vmcoreinfo_append_str("SIZE(%s)=%lu\n", #name, \
 			      (unsigned long)sizeof(name))
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index b3663896278e..4f63597c824d 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -410,7 +410,7 @@ static int __init crash_save_vmcoreinfo_init(void)
 	VMCOREINFO_SYMBOL(contig_page_data);
 #endif
 #ifdef CONFIG_SPARSEMEM
-	VMCOREINFO_SYMBOL(mem_section);
+	VMCOREINFO_SYMBOL_ARRAY(mem_section);
 	VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
 	VMCOREINFO_STRUCT_SIZE(mem_section);
 	VMCOREINFO_OFFSET(mem_section, section_mem_map);
