linux-kernel.vger.kernel.org archive mirror
* [PATCH] [0/7] Some random x86 patches that should all go into git-x86
@ 2008-01-16 22:27 Andi Kleen
  2008-01-16 22:27 ` [PATCH] [1/7] i386: Move MWAIT idle check to generic CPU initialization Andi Kleen
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: Andi Kleen @ 2008-01-16 22:27 UTC (permalink / raw)
  To: mingo, tglx, linux-kernel


Some are reposts, some are not. See patch descriptions for details.
I believe I addressed all feedback that made sense in the reposted
patches.

-Andi

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH] [1/7] i386: Move MWAIT idle check to generic CPU initialization
  2008-01-16 22:27 [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Andi Kleen
@ 2008-01-16 22:27 ` Andi Kleen
  2008-01-16 22:27 ` [PATCH] [2/7] Use the correct cpuid method to detect MWAIT support for C states Andi Kleen
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Andi Kleen @ 2008-01-16 22:27 UTC (permalink / raw)
  To: mingo, tglx, linux-kernel


Previously the MWAIT idle check was only run for Intel CPUs, but AMD Fam10h
implements MWAIT too.

This matches the 64-bit behaviour.

Signed-off-by: Andi Kleen <ak@suse.de>

---
 arch/x86/kernel/cpu/common.c |    2 ++
 arch/x86/kernel/cpu/intel.c  |    1 -
 2 files changed, 2 insertions(+), 1 deletion(-)

Index: linux/arch/x86/kernel/cpu/common.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/common.c
+++ linux/arch/x86/kernel/cpu/common.c
@@ -499,6 +499,8 @@ void __cpuinit identify_cpu(struct cpuin
 
 	/* Init Machine Check Exception if available. */
 	mcheck_init(c);
+
+	select_idle_routine(c);
 }
 
 void __init identify_boot_cpu(void)
Index: linux/arch/x86/kernel/cpu/intel.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/intel.c
+++ linux/arch/x86/kernel/cpu/intel.c
@@ -137,7 +137,6 @@ static void __cpuinit init_intel(struct 
 	}
 #endif
 
-	select_idle_routine(c);
 	l2 = init_intel_cacheinfo(c);
 	if (c->cpuid_level > 9 ) {
 		unsigned eax = cpuid_eax(10);

* [PATCH] [2/7] Use the correct cpuid method to detect MWAIT support for C states
  2008-01-16 22:27 [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Andi Kleen
  2008-01-16 22:27 ` [PATCH] [1/7] i386: Move MWAIT idle check to generic CPU initialization Andi Kleen
@ 2008-01-16 22:27 ` Andi Kleen
  2008-01-16 22:27 ` [PATCH] [3/7] Use shorter addresses in i386 segfault printks Andi Kleen
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Andi Kleen @ 2008-01-16 22:27 UTC (permalink / raw)
  To: andreas.herrmann3, mingo, tglx, linux-kernel


Previously there was an AMD-specific quirk to handle the case of
AMD Fam10h MWAIT not supporting any C states. But it turns out
that CPUID already has a way to detect that directly, without
using special quirks.

The new code simply checks whether MWAIT supports at least C1 and doesn't
use it if it doesn't. No more vendor specific code.

Note this does not simply clear MWAIT, because MWAIT can still be
useful even without C states.

Credit goes to Ben Serebrin for pointing out the (nearly) obvious.

Cc: "Andreas Herrmann" <andreas.herrmann3@amd.com>

Signed-off-by: Andi Kleen <ak@suse.de>

---
 arch/x86/kernel/cpu/amd.c    |    3 ---
 arch/x86/kernel/process_32.c |   10 +++++++++-
 arch/x86/kernel/process_64.c |   11 ++++++++++-
 arch/x86/kernel/setup_64.c   |    4 ----
 4 files changed, 19 insertions(+), 9 deletions(-)

Index: linux/arch/x86/kernel/cpu/amd.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/amd.c
+++ linux/arch/x86/kernel/cpu/amd.c
@@ -300,9 +300,6 @@ static void __cpuinit init_amd(struct cp
 		local_apic_timer_disabled = 1;
 #endif
 
-	if (c->x86 == 0x10 && !force_mwait)
-		clear_bit(X86_FEATURE_MWAIT, c->x86_capability);
-
 	/* K6s reports MCEs but don't actually have all the MSRs */
 	if (c->x86 < 6)
 		clear_bit(X86_FEATURE_MCE, c->x86_capability);
Index: linux/arch/x86/kernel/process_32.c
===================================================================
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -285,9 +285,17 @@ static void mwait_idle(void)
 	mwait_idle_with_hints(0, 0);
 }
 
+static int mwait_usable(const struct cpuinfo_x86 *c)
+{
+	if (force_mwait)
+		return 1;
+	/* Any C1 states supported? */
+	return c->cpuid_level >= 5 && ((cpuid_edx(5) >> 4) & 0xf) > 0;
+}
+
 void __cpuinit select_idle_routine(const struct cpuinfo_x86 *c)
 {
-	if (cpu_has(c, X86_FEATURE_MWAIT)) {
+	if (cpu_has(c, X86_FEATURE_MWAIT) && mwait_usable(c)) {
 		printk("monitor/mwait feature present.\n");
 		/*
 		 * Skip, if setup has overridden idle.
Index: linux/arch/x86/kernel/process_64.c
===================================================================
--- linux.orig/arch/x86/kernel/process_64.c
+++ linux/arch/x86/kernel/process_64.c
@@ -280,10 +280,19 @@ static void mwait_idle(void)
 	}
 }
 
+
+static int mwait_usable(const struct cpuinfo_x86 *c)
+{
+	if (force_mwait)
+		return 1;
+	/* Any C1 states supported? */
+	return c->cpuid_level >= 5 && ((cpuid_edx(5) >> 4) & 0xf) > 0;
+}
+
 void __cpuinit select_idle_routine(const struct cpuinfo_x86 *c)
 {
 	static int printed;
-	if (cpu_has(c, X86_FEATURE_MWAIT)) {
+	if (cpu_has(c, X86_FEATURE_MWAIT) && mwait_usable(c)) {
 		/*
 		 * Skip, if setup has overridden idle.
 		 * One CPU supports mwait => All CPUs supports mwait
Index: linux/arch/x86/kernel/setup_64.c
===================================================================
--- linux.orig/arch/x86/kernel/setup_64.c
+++ linux/arch/x86/kernel/setup_64.c
@@ -778,10 +778,6 @@ static void __cpuinit init_amd(struct cp
 	/* MFENCE stops RDTSC speculation */
 	set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC);
 
-	/* Family 10 doesn't support C states in MWAIT so don't use it */
-	if (c->x86 == 0x10 && !force_mwait)
-		clear_cpu_cap(c, X86_FEATURE_MWAIT);
-
 	if (amd_apic_timer_broken())
 		disable_apic_timer = 1;
 }

* [PATCH] [3/7] Use shorter addresses in i386 segfault printks
  2008-01-16 22:27 [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Andi Kleen
  2008-01-16 22:27 ` [PATCH] [1/7] i386: Move MWAIT idle check to generic CPU initialization Andi Kleen
  2008-01-16 22:27 ` [PATCH] [2/7] Use the correct cpuid method to detect MWAIT support for C states Andi Kleen
@ 2008-01-16 22:27 ` Andi Kleen
  2008-01-17  2:58   ` Harvey Harrison
  2008-01-16 22:27 ` [PATCH] [4/7] Print which shared library/executable faulted in segfault etc. messages Andi Kleen
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 14+ messages in thread
From: Andi Kleen @ 2008-01-16 22:27 UTC (permalink / raw)
  To: mingo, tglx, linux-kernel


Signed-off-by: Andi Kleen <ak@suse.de>

---
 arch/x86/mm/fault_32.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux/arch/x86/mm/fault_32.c
===================================================================
--- linux.orig/arch/x86/mm/fault_32.c
+++ linux/arch/x86/mm/fault_32.c
@@ -516,7 +516,7 @@ bad_area_nosemaphore:
 		    printk_ratelimit()) {
 			printk(
 #ifdef CONFIG_X86_32
-			"%s%s[%d]: segfault at %08lx ip %08lx sp %08lx error %lx\n",
+			"%s%s[%d]: segfault at %lx ip %08lx sp %08lx error %lx\n",
 #else
 			"%s%s[%d]: segfault at %lx ip %lx sp %lx error %lx\n",
 #endif

* [PATCH] [4/7] Print which shared library/executable faulted in segfault etc. messages
  2008-01-16 22:27 [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Andi Kleen
                   ` (2 preceding siblings ...)
  2008-01-16 22:27 ` [PATCH] [3/7] Use shorter addresses in i386 segfault printks Andi Kleen
@ 2008-01-16 22:27 ` Andi Kleen
  2008-01-16 22:27 ` [PATCH] [5/7] Replace hard coded reservations in x86-64 early boot code with dynamic table v2 Andi Kleen
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Andi Kleen @ 2008-01-16 22:27 UTC (permalink / raw)
  To: mingo, tglx, linux-kernel


They now look like

hal-resmgr[13791]: segfault at 3c rip 2b9c8caec182 rsp 7fff1e825d30 error 4 in libacl.so.1.1.0[2b9c8caea000+6000]

This makes it easier to pinpoint bugs to specific libraries. 

Printing the offset into the mapping also always makes it possible to find
the correct fault point in a library, even with randomized mappings.
Previously there was no way to find the correct code address inside
a randomized mapping.

Relies on the earlier patch to shorten the printk formats.

The messages are now often longer than 80 characters, but I think that's
worth it.

Patch for i386 and x86-64.

[includes fix from Eric Dumazet to check d_path error value]

Signed-off-by: Andi Kleen <ak@suse.de>

---
 arch/x86/kernel/signal_32.c |    7 +++++--
 arch/x86/kernel/signal_64.c |    7 +++++--
 arch/x86/kernel/traps_32.c  |    7 +++++--
 arch/x86/mm/fault_32.c      |    4 +++-
 include/linux/mm.h          |    1 +
 mm/memory.c                 |   31 +++++++++++++++++++++++++++++++
 6 files changed, 50 insertions(+), 7 deletions(-)

Index: linux/include/linux/mm.h
===================================================================
--- linux.orig/include/linux/mm.h
+++ linux/include/linux/mm.h
@@ -1145,6 +1145,7 @@ extern int randomize_va_space;
 #endif
 
 const char * arch_vma_name(struct vm_area_struct *vma);
+void print_vma_addr(char *prefix, unsigned long rip);
 
 struct page *sparse_mem_map_populate(unsigned long pnum, int nid);
 pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
Index: linux/mm/memory.c
===================================================================
--- linux.orig/mm/memory.c
+++ linux/mm/memory.c
@@ -2746,3 +2746,34 @@ int access_process_vm(struct task_struct
 
 	return buf - old_buf;
 }
+
+/*
+ * Print the name of a VMA.
+ */
+void print_vma_addr(char *prefix, unsigned long ip)
+{
+	struct mm_struct *mm = current->mm;
+	struct vm_area_struct *vma;
+
+	down_read(&mm->mmap_sem);
+	vma = find_vma(mm, ip);
+	if (vma && vma->vm_file) {
+		struct file *f = vma->vm_file;
+		char *buf = (char *)__get_free_page(GFP_KERNEL);
+		if (buf) {
+			char *p, *s;
+
+			p = d_path(f->f_dentry, f->f_vfsmnt, buf, PAGE_SIZE);
+			if (IS_ERR(p))
+				p = "?";
+			s = strrchr(p, '/');
+			if (s)
+				p = s+1;
+			printk("%s%s[%lx+%lx]", prefix, p,
+					vma->vm_start,
+					vma->vm_end - vma->vm_start);
+			free_page((unsigned long)buf);
+		}
+	}
+	up_read(&current->mm->mmap_sem);
+}
Index: linux/arch/x86/kernel/signal_32.c
===================================================================
--- linux.orig/arch/x86/kernel/signal_32.c
+++ linux/arch/x86/kernel/signal_32.c
@@ -198,12 +198,15 @@ asmlinkage int sys_sigreturn(unsigned lo
 	return ax;
 
 badframe:
-	if (show_unhandled_signals && printk_ratelimit())
+	if (show_unhandled_signals && printk_ratelimit()) {
 		printk("%s%s[%d] bad frame in sigreturn frame:%p ip:%lx"
-		       " sp:%lx oeax:%lx\n",
+		       " sp:%lx oeax:%lx",
 		    task_pid_nr(current) > 1 ? KERN_INFO : KERN_EMERG,
 		    current->comm, task_pid_nr(current), frame, regs->ip,
 		    regs->sp, regs->orig_ax);
+		print_vma_addr(" in ", regs->ip);
+		printk("\n");
+	}
 
 	force_sig(SIGSEGV, current);
 	return 0;
Index: linux/arch/x86/kernel/signal_64.c
===================================================================
--- linux.orig/arch/x86/kernel/signal_64.c
+++ linux/arch/x86/kernel/signal_64.c
@@ -481,9 +481,12 @@ do_notify_resume(struct pt_regs *regs, v
 void signal_fault(struct pt_regs *regs, void __user *frame, char *where)
 { 
 	struct task_struct *me = current; 
-	if (show_unhandled_signals && printk_ratelimit())
-		printk("%s[%d] bad frame in %s frame:%p ip:%lx sp:%lx orax:%lx\n",
+	if (show_unhandled_signals && printk_ratelimit()) {
+		printk("%s[%d] bad frame in %s frame:%p ip:%lx sp:%lx orax:%lx",
 	       me->comm,me->pid,where,frame,regs->ip,regs->sp,regs->orig_ax);
+		print_vma_addr(" in ", regs->ip);
+		printk("\n");
+	}
 
 	force_sig(SIGSEGV, me); 
 } 
Index: linux/arch/x86/kernel/traps_32.c
===================================================================
--- linux.orig/arch/x86/kernel/traps_32.c
+++ linux/arch/x86/kernel/traps_32.c
@@ -608,11 +608,14 @@ void __kprobes do_general_protection(str
 	current->thread.error_code = error_code;
 	current->thread.trap_no = 13;
 	if (show_unhandled_signals && unhandled_signal(current, SIGSEGV) &&
-	    printk_ratelimit())
+	    printk_ratelimit()) {
 		printk(KERN_INFO
-		    "%s[%d] general protection ip:%lx sp:%lx error:%lx\n",
+		    "%s[%d] general protection ip:%lx sp:%lx error:%lx",
 		    current->comm, task_pid_nr(current),
 		    regs->ip, regs->sp, error_code);
+		print_vma_addr(" in ", regs->ip);
+		printk("\n");
+	}
 
 	force_sig(SIGSEGV, current);
 	return;
Index: linux/arch/x86/mm/fault_32.c
===================================================================
--- linux.orig/arch/x86/mm/fault_32.c
+++ linux/arch/x86/mm/fault_32.c
@@ -518,11 +518,13 @@ bad_area_nosemaphore:
 #ifdef CONFIG_X86_32
 			"%s%s[%d]: segfault at %lx ip %08lx sp %08lx error %lx\n",
 #else
-			"%s%s[%d]: segfault at %lx ip %lx sp %lx error %lx\n",
+			"%s%s[%d]: segfault at %lx ip %lx sp %lx error %lx",
 #endif
 			task_pid_nr(tsk) > 1 ? KERN_INFO : KERN_EMERG,
 			tsk->comm, task_pid_nr(tsk), address, regs->ip,
 			regs->sp, error_code);
+			print_vma_addr(" in ", regs->ip);
+			printk("\n");
 		}
 		tsk->thread.cr2 = address;
 		/* Kernel addresses are always protection faults */

* [PATCH] [5/7] Replace hard coded reservations in x86-64 early boot code with dynamic table v2
  2008-01-16 22:27 [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Andi Kleen
                   ` (3 preceding siblings ...)
  2008-01-16 22:27 ` [PATCH] [4/7] Print which shared library/executable faulted in segfault etc. messages Andi Kleen
@ 2008-01-16 22:27 ` Andi Kleen
  2008-01-16 22:27 ` [PATCH] [6/7] Optimize lock prefix switching to run less frequently v2 Andi Kleen
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Andi Kleen @ 2008-01-16 22:27 UTC (permalink / raw)
  To: peterz, mingo, tglx, linux-kernel


On x86-64 there are several memory allocations before bootmem. To avoid
them stomping on each other, they all used to be hard coded in bad_area().
Replace this with an array that is filled as needed.

This cleans up the code considerably and allows its use to be expanded.

v1->v2: add one tab

Cc: peterz@infradead.org

Signed-off-by: Andi Kleen <ak@suse.de>

---
 arch/x86/kernel/e820_64.c  |   95 ++++++++++++++++++++++++---------------------
 arch/x86/kernel/head64.c   |   48 ++++++++++++++++++++++
 arch/x86/kernel/setup_64.c |   72 +---------------------------------
 arch/x86/mm/init_64.c      |    5 +-
 arch/x86/mm/numa_64.c      |    1 
 include/asm-x86/e820_64.h  |    4 -
 include/asm-x86/proto.h    |    2 
 7 files changed, 110 insertions(+), 117 deletions(-)

Index: linux/arch/x86/kernel/e820_64.c
===================================================================
--- linux.orig/arch/x86/kernel/e820_64.c
+++ linux/arch/x86/kernel/e820_64.c
@@ -47,56 +47,65 @@ unsigned long end_pfn_map;
  */
 static unsigned long __initdata end_user_pfn = MAXMEM>>PAGE_SHIFT;
 
-/* Check for some hardcoded bad areas that early boot is not allowed to touch */
-static inline int bad_addr(unsigned long *addrp, unsigned long size)
-{
-	unsigned long addr = *addrp, last = addr + size;
-
-	/* various gunk below that needed for SMP startup */
-	if (addr < 0x8000) {
-		*addrp = PAGE_ALIGN(0x8000);
-		return 1;
-	}
-
-	/* direct mapping tables of the kernel */
-	if (last >= table_start<<PAGE_SHIFT && addr < table_end<<PAGE_SHIFT) {
-		*addrp = PAGE_ALIGN(table_end << PAGE_SHIFT);
-		return 1;
-	}
-
-	/* initrd */
-#ifdef CONFIG_BLK_DEV_INITRD
-	if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {
-		unsigned long ramdisk_image = boot_params.hdr.ramdisk_image;
-		unsigned long ramdisk_size  = boot_params.hdr.ramdisk_size;
-		unsigned long ramdisk_end   = ramdisk_image+ramdisk_size;
+/*
+ * Early reserved memory areas.
+ */
+#define MAX_EARLY_RES 20
 
-		if (last >= ramdisk_image && addr < ramdisk_end) {
-			*addrp = PAGE_ALIGN(ramdisk_end);
-			return 1;
-		}
-	}
+struct early_res {
+	unsigned long start, end;
+};
+static struct early_res early_res[MAX_EARLY_RES] __initdata = {
+	{ 0, PAGE_SIZE },			/* BIOS data page */
+#ifdef CONFIG_SMP
+	{ SMP_TRAMPOLINE_BASE, SMP_TRAMPOLINE_BASE + 2*PAGE_SIZE },
 #endif
-	/* kernel code */
-	if (last >= __pa_symbol(&_text) && addr < __pa_symbol(&_end)) {
-		*addrp = PAGE_ALIGN(__pa_symbol(&_end));
-		return 1;
+	{}
+};
+
+void __init reserve_early(unsigned long start, unsigned long end)
+{
+	int i;
+	struct early_res *r;
+	for (i = 0; i < MAX_EARLY_RES && early_res[i].end; i++) {
+		r = &early_res[i];
+		if (end > r->start && start < r->end)
+			panic("Duplicated early reservation %lx-%lx\n",
+			      start, end);
 	}
+	if (i >= MAX_EARLY_RES)
+		panic("Too many early reservations");
+	r = &early_res[i];
+	r->start = start;
+	r->end = end;
+}
 
-	if (last >= ebda_addr && addr < ebda_addr + ebda_size) {
-		*addrp = PAGE_ALIGN(ebda_addr + ebda_size);
-		return 1;
+void __init early_res_to_bootmem(void)
+{
+	int i;
+	for (i = 0; i < MAX_EARLY_RES && early_res[i].end; i++) {
+		struct early_res *r = &early_res[i];
+		reserve_bootmem_generic(r->start, r->end - r->start);
 	}
+}
 
-#ifdef CONFIG_NUMA
-	/* NUMA memory to node map */
-	if (last >= nodemap_addr && addr < nodemap_addr + nodemap_size) {
-		*addrp = nodemap_addr + nodemap_size;
-		return 1;
+/* Check for already reserved areas */
+static inline int bad_addr(unsigned long *addrp, unsigned long size)
+{
+	int i;
+	unsigned long addr = *addrp, last;
+	int changed = 0;
+again:
+	last = addr + size;
+	for (i = 0; i < MAX_EARLY_RES && early_res[i].end; i++) {
+		struct early_res *r = &early_res[i];
+		if (last >= r->start && addr < r->end) {
+			*addrp = addr = r->end;
+			changed = 1;
+			goto again;
+		}
 	}
-#endif
-	/* XXX ramdisk image here? */
-	return 0;
+	return changed;
 }
 
 /*
Index: linux/arch/x86/kernel/head64.c
===================================================================
--- linux.orig/arch/x86/kernel/head64.c
+++ linux/arch/x86/kernel/head64.c
@@ -21,6 +21,7 @@
 #include <asm/tlbflush.h>
 #include <asm/sections.h>
 #include <asm/kdebug.h>
+#include <asm/e820.h>
 
 static void __init zap_identity_mappings(void)
 {
@@ -48,6 +49,35 @@ static void __init copy_bootdata(char *r
 	}
 }
 
+#define EBDA_ADDR_POINTER 0x40E
+
+static __init void reserve_ebda(void)
+{
+	unsigned ebda_addr, ebda_size;
+
+	/*
+	 * there is a real-mode segmented pointer pointing to the
+	 * 4K EBDA area at 0x40E
+	 */
+	ebda_addr = *(unsigned short *)__va(EBDA_ADDR_POINTER);
+	ebda_addr <<= 4;
+
+	if (!ebda_addr)
+		return;
+
+	ebda_size = *(unsigned short *)__va(ebda_addr);
+
+	/* Round EBDA up to pages */
+	if (ebda_size == 0)
+		ebda_size = 1;
+	ebda_size <<= 10;
+	ebda_size = round_up(ebda_size + (ebda_addr & ~PAGE_MASK), PAGE_SIZE);
+	if (ebda_size > 64*1024)
+		ebda_size = 64*1024;
+
+	reserve_early(ebda_addr, ebda_addr + ebda_size);
+}
+
 void __init x86_64_start_kernel(char * real_mode_data)
 {
 	int i;
@@ -75,5 +105,23 @@ void __init x86_64_start_kernel(char * r
 	pda_init(0);
 	copy_bootdata(__va(real_mode_data));
 
+	reserve_early(__pa_symbol(&_text), __pa_symbol(&_end));
+
+	/* Reserve INITRD */
+	if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {
+		unsigned long ramdisk_image = boot_params.hdr.ramdisk_image;
+		unsigned long ramdisk_size  = boot_params.hdr.ramdisk_size;
+		unsigned long ramdisk_end   = ramdisk_image + ramdisk_size;
+		reserve_early(ramdisk_image, ramdisk_end);
+	}
+
+	reserve_ebda();
+
+	/*
+	 * At this point everything still needed from the boot loader
+	 * or BIOS or kernel text should be early reserved or marked not
+	 * RAM in e820. All other memory is free game.
+	 */
+
 	start_kernel();
 }
Index: linux/arch/x86/kernel/setup_64.c
===================================================================
--- linux.orig/arch/x86/kernel/setup_64.c
+++ linux/arch/x86/kernel/setup_64.c
@@ -247,46 +247,6 @@ static inline void __init reserve_crashk
 {}
 #endif
 
-unsigned __initdata ebda_addr;
-unsigned __initdata ebda_size;
-
-static void __init discover_ebda(void)
-{
-	unsigned short *ptr;
-	/*
-	 * there is a real-mode segmented pointer pointing to the
-	 * 4K EBDA area at 0x40E
-	 */
-	/*
-	 * There can be some situations, like paravirtualized guests,
-	 * in which there is no available ebda information. In such
-	 * case, just skip it
-	 */
-
-	ebda_addr = get_bios_ebda();
-	if (!ebda_addr) {
-		ebda_size = 0;
-		return;
-	}
-
-	ptr = (unsigned short *)early_ioremap(ebda_addr, 2);
-	if (!ptr) {
-		ebda_addr = 0;
-		ebda_size = 0;
-		return;
-	}
-	ebda_size = *(unsigned short *)ptr;
-	early_iounmap((char *)ptr, 2);
-
-	/* Round EBDA up to pages */
-	if (ebda_size == 0)
-		ebda_size = 1;
-	ebda_size <<= 10;
-	ebda_size = round_up(ebda_size + (ebda_addr & ~PAGE_MASK), PAGE_SIZE);
-	if (ebda_size > 64*1024)
-		ebda_size = 64*1024;
-}
-
 /* Overridden in paravirt.c if CONFIG_PARAVIRT */
 void __attribute__((weak)) __init memory_setup(void)
 {
@@ -366,8 +326,6 @@ void __init setup_arch(char **cmdline_p)
 
 	check_efer();
 
-	discover_ebda();
-
 	init_memory_mapping(0, (end_pfn_map << PAGE_SHIFT));
 	if (efi_enabled)
 		efi_init();
@@ -414,33 +372,7 @@ void __init setup_arch(char **cmdline_p)
 	contig_initmem_init(0, end_pfn);
 #endif
 
-	/* Reserve direct mapping */
-	reserve_bootmem_generic(table_start << PAGE_SHIFT,
-				(table_end - table_start) << PAGE_SHIFT);
-
-	/* reserve kernel */
-	reserve_bootmem_generic(__pa_symbol(&_text),
-				__pa_symbol(&_end) - __pa_symbol(&_text));
-
-	/*
-	 * reserve physical page 0 - it's a special BIOS page on many boxes,
-	 * enabling clean reboots, SMP operation, laptop functions.
-	 */
-	reserve_bootmem_generic(0, PAGE_SIZE);
-
-	/* reserve ebda region */
-	if (ebda_addr)
-		reserve_bootmem_generic(ebda_addr, ebda_size);
-#ifdef CONFIG_NUMA
-	/* reserve nodemap region */
-	if (nodemap_addr)
-		reserve_bootmem_generic(nodemap_addr, nodemap_size);
-#endif
-
-#ifdef CONFIG_SMP
-	/* Reserve SMP trampoline */
-	reserve_bootmem_generic(SMP_TRAMPOLINE_BASE, 2*PAGE_SIZE);
-#endif
+	early_res_to_bootmem();
 
 #ifdef CONFIG_ACPI_SLEEP
 	/*
@@ -470,6 +402,8 @@ void __init setup_arch(char **cmdline_p)
 			initrd_start = ramdisk_image + PAGE_OFFSET;
 			initrd_end = initrd_start+ramdisk_size;
 		} else {
+			/* Assumes everything on node 0 */
+			free_bootmem(ramdisk_image, ramdisk_size);
 			printk(KERN_ERR "initrd extends beyond end of memory "
 			       "(0x%08lx > 0x%08lx)\ndisabling initrd\n",
 			       ramdisk_end, end_of_mem);
Index: linux/arch/x86/mm/numa_64.c
===================================================================
--- linux.orig/arch/x86/mm/numa_64.c
+++ linux/arch/x86/mm/numa_64.c
@@ -104,6 +104,7 @@ static int __init allocate_cachealigned_
 	}
 	pad_addr = (nodemap_addr + pad) & ~pad;
 	memnodemap = phys_to_virt(pad_addr);
+	reserve_early(nodemap_addr, nodemap_addr + nodemap_size);
 
 	printk(KERN_DEBUG "NUMA: Allocated memnodemap from %lx - %lx\n",
 	       nodemap_addr, nodemap_addr + nodemap_size);
Index: linux/include/asm-x86/e820_64.h
===================================================================
--- linux.orig/include/asm-x86/e820_64.h
+++ linux/include/asm-x86/e820_64.h
@@ -41,8 +41,8 @@ extern void finish_e820_parsing(void);
 extern struct e820map e820;
 extern void update_e820(void);
 
-extern unsigned ebda_addr, ebda_size;
-extern unsigned long nodemap_addr, nodemap_size;
+extern void reserve_early(unsigned long start, unsigned long end);
+extern void early_res_to_bootmem(void);
 
 #endif/*!__ASSEMBLY__*/
 
Index: linux/arch/x86/mm/init_64.c
===================================================================
--- linux.orig/arch/x86/mm/init_64.c
+++ linux/arch/x86/mm/init_64.c
@@ -176,7 +176,8 @@ __set_fixmap (enum fixed_addresses idx, 
 	set_pte_phys(address, phys, prot);
 }
 
-unsigned long __meminitdata table_start, table_end;
+static unsigned long __initdata table_start;
+static unsigned long __meminitdata table_end;
 
 static __meminit void *alloc_low_page(unsigned long *phys)
 { 
@@ -391,6 +392,8 @@ void __init_refok init_memory_mapping(un
 	if (!after_bootmem)
 		mmu_cr4_features = read_cr4();
 	__flush_tlb_all();
+
+	reserve_early(table_start << PAGE_SHIFT, table_end << PAGE_SHIFT);
 }
 
 #ifndef CONFIG_NUMA
Index: linux/include/asm-x86/proto.h
===================================================================
--- linux.orig/include/asm-x86/proto.h
+++ linux/include/asm-x86/proto.h
@@ -22,8 +22,6 @@ extern void syscall32_cpu_init(void);
 
 extern void check_efer(void);
 
-extern unsigned long table_start, table_end;
-
 extern int reboot_force;
 
 long do_arch_prctl(struct task_struct *task, int code, unsigned long addr);

* [PATCH] [6/7] Optimize lock prefix switching to run less frequently v2
  2008-01-16 22:27 [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Andi Kleen
                   ` (4 preceding siblings ...)
  2008-01-16 22:27 ` [PATCH] [5/7] Replace hard coded reservations in x86-64 early boot code with dynamic table v2 Andi Kleen
@ 2008-01-16 22:27 ` Andi Kleen
  2008-01-16 22:27 ` [PATCH] [7/7] Don't disable the APIC if it hasn't been mapped yet Andi Kleen
  2008-01-18  9:43 ` [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Ingo Molnar
  7 siblings, 0 replies; 14+ messages in thread
From: Andi Kleen @ 2008-01-16 22:27 UTC (permalink / raw)
  To: mingo, tglx, linux-kernel


On VMs implemented using JITs that cache translated code, changing the lock
prefixes is a quite costly operation that forces the JIT to throw away and
retranslate a lot of code.

Previously an SMP kernel would rewrite the locks once for each CPU, which
is quite unnecessary. This patch changes the code to never switch at boot
in the normal case (SMP kernel booting with >1 CPU), or to switch only once
for an SMP kernel booting on UP.

This makes a significant difference in boot-up performance on AMD SimNow!
I also expect it to be a little faster on native systems, because an SMP
switch does a lot of text_poke()s, each of which synchronizes the pipeline.

v1->v2: Rename max_cpus 
v1->v2: Fix off by one in UP check (Thomas Gleixner)

Signed-off-by: Andi Kleen <ak@suse.de>

---
 arch/x86/kernel/alternative.c |   16 ++++++++++++++--
 include/linux/smp.h           |    2 ++
 init/main.c                   |   16 ++++++++--------
 3 files changed, 24 insertions(+), 10 deletions(-)

Index: linux/arch/x86/kernel/alternative.c
===================================================================
--- linux.orig/arch/x86/kernel/alternative.c
+++ linux/arch/x86/kernel/alternative.c
@@ -273,6 +273,7 @@ struct smp_alt_module {
 };
 static LIST_HEAD(smp_alt_modules);
 static DEFINE_SPINLOCK(smp_alt);
+static int smp_mode = 1;	/* protected by smp_alt */
 
 void alternatives_smp_module_add(struct module *mod, char *name,
 				 void *locks, void *locks_end,
@@ -354,7 +355,14 @@ void alternatives_smp_switch(int smp)
 	BUG_ON(!smp && (num_online_cpus() > 1));
 
 	spin_lock_irqsave(&smp_alt, flags);
-	if (smp) {
+
+	/*
+	 * Avoid unnecessary switches because it forces JIT based VMs to
+	 * throw away all cached translations, which can be quite costly.
+	 */
+	if (smp == smp_mode) {
+		/* nothing */
+	} else if (smp) {
 		printk(KERN_INFO "SMP alternatives: switching to SMP code\n");
 		clear_cpu_cap(&boot_cpu_data, X86_FEATURE_UP);
 		clear_cpu_cap(&cpu_data(0), X86_FEATURE_UP);
@@ -369,6 +377,7 @@ void alternatives_smp_switch(int smp)
 			alternatives_smp_unlock(mod->locks, mod->locks_end,
 						mod->text, mod->text_end);
 	}
+	smp_mode = smp;
 	spin_unlock_irqrestore(&smp_alt, flags);
 }
 
@@ -441,7 +450,10 @@ void __init alternative_instructions(voi
 		alternatives_smp_module_add(NULL, "core kernel",
 					    __smp_locks, __smp_locks_end,
 					    _text, _etext);
-		alternatives_smp_switch(0);
+
+		/* Only switch to UP mode if we don't immediately boot others */
+		if (num_possible_cpus() == 1 || setup_max_cpus <= 1)
+			alternatives_smp_switch(0);
 	}
 #endif
  	apply_paravirt(__parainstructions, __parainstructions_end);
Index: linux/include/linux/smp.h
===================================================================
--- linux.orig/include/linux/smp.h
+++ linux/include/linux/smp.h
@@ -78,6 +78,8 @@ int on_each_cpu(void (*func) (void *info
  */
 void smp_prepare_boot_cpu(void);
 
+extern unsigned int setup_max_cpus;
+
 #else /* !SMP */
 
 /*
Index: linux/init/main.c
===================================================================
--- linux.orig/init/main.c
+++ linux/init/main.c
@@ -128,7 +128,7 @@ static char *ramdisk_execute_command;
 
 #ifdef CONFIG_SMP
 /* Setup configured maximum number of CPUs to activate */
-static unsigned int __initdata max_cpus = NR_CPUS;
+unsigned int __initdata setup_max_cpus = NR_CPUS;
 
 /*
  * Setup routine for controlling SMP activation
@@ -146,7 +146,7 @@ static inline void disable_ioapic_setup(
 
 static int __init nosmp(char *str)
 {
-	max_cpus = 0;
+	setup_max_cpus = 0;
 	disable_ioapic_setup();
 	return 0;
 }
@@ -155,8 +155,8 @@ early_param("nosmp", nosmp);
 
 static int __init maxcpus(char *str)
 {
-	get_option(&str, &max_cpus);
-	if (max_cpus == 0)
+	get_option(&str, &setup_max_cpus);
+	if (setup_max_cpus == 0)
 		disable_ioapic_setup();
 
 	return 0;
@@ -164,7 +164,7 @@ static int __init maxcpus(char *str)
 
 early_param("maxcpus", maxcpus);
 #else
-#define max_cpus NR_CPUS
+#define setup_max_cpus NR_CPUS
 #endif
 
 /*
@@ -393,7 +393,7 @@ static void __init smp_init(void)
 
 	/* FIXME: This should be done in userspace --RR */
 	for_each_present_cpu(cpu) {
-		if (num_online_cpus() >= max_cpus)
+		if (num_online_cpus() >= setup_max_cpus)
 			break;
 		if (!cpu_online(cpu))
 			cpu_up(cpu);
@@ -401,7 +401,7 @@ static void __init smp_init(void)
 
 	/* Any cleanup work */
 	printk(KERN_INFO "Brought up %ld CPUs\n", (long)num_online_cpus());
-	smp_cpus_done(max_cpus);
+	smp_cpus_done(setup_max_cpus);
 }
 
 #endif
@@ -823,7 +823,7 @@ static int __init kernel_init(void * unu
 	__set_special_pids(1, 1);
 	cad_pid = task_pid(current);
 
-	smp_prepare_cpus(max_cpus);
+	smp_prepare_cpus(setup_max_cpus);
 
 	do_pre_smp_initcalls();
 

* [PATCH] [7/7] Don't disable the APIC if it hasn't been mapped yet
  2008-01-16 22:27 [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Andi Kleen
                   ` (5 preceding siblings ...)
  2008-01-16 22:27 ` [PATCH] [6/7] Optimize lock prefix switching to run less frequently v2 Andi Kleen
@ 2008-01-16 22:27 ` Andi Kleen
  2008-01-18  9:43 ` [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Ingo Molnar
  7 siblings, 0 replies; 14+ messages in thread
From: Andi Kleen @ 2008-01-16 22:27 UTC (permalink / raw)
  To: mingo, tglx, linux-kernel


When the kernel panics early for some unrelated reason, it would
eventually take an early exception inside panic() because
clear_local_APIC() tried to disable the not-yet-mapped APIC.
Check for that explicitly.

Signed-off-by: Andi Kleen <ak@suse.de>

---
 arch/x86/kernel/apic_32.c |   11 ++++++++---
 arch/x86/kernel/apic_64.c |    9 +++++++--
 2 files changed, 15 insertions(+), 5 deletions(-)

Index: linux/arch/x86/kernel/apic_32.c
===================================================================
--- linux.orig/arch/x86/kernel/apic_32.c
+++ linux/arch/x86/kernel/apic_32.c
@@ -99,6 +99,8 @@ static DEFINE_PER_CPU(struct clock_event
 /* Local APIC was disabled by the BIOS and enabled by the kernel */
 static int enabled_via_apicbase;
 
+static unsigned long apic_phys;
+
 /*
  * Get the LAPIC version
  */
@@ -616,9 +618,14 @@ int setup_profiling_timer(unsigned int m
  */
 void clear_local_APIC(void)
 {
-	int maxlvt = lapic_get_maxlvt();
+	int maxlvt;
 	u32 v;
 
+	/* APIC hasn't been mapped yet */
+	if (!apic_phys)
+		return;
+
+	maxlvt = lapic_get_maxlvt();
 	/*
 	 * Masking an LVT entry can trigger a local APIC error
 	 * if the vector is zero. Mask LVTERR first to prevent this.
@@ -1105,8 +1112,6 @@ no_apic:
  */
 void __init init_apic_mappings(void)
 {
-	unsigned long apic_phys;
-
 	/*
 	 * If no local APIC can be found then set up a fake all
 	 * zeroes page to simulate the local APIC and another
Index: linux/arch/x86/kernel/apic_64.c
===================================================================
--- linux.orig/arch/x86/kernel/apic_64.c
+++ linux/arch/x86/kernel/apic_64.c
@@ -81,6 +81,8 @@ static struct clock_event_device lapic_c
 };
 static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
 
+static unsigned long apic_phys;
+
 /*
  * Get the LAPIC version
  */
@@ -516,6 +518,11 @@ void clear_local_APIC(void)
-	int maxlvt = lapic_get_maxlvt();
+	int maxlvt;
 	u32 v;
 
+	/* APIC hasn't been mapped yet */
+	if (!apic_phys)
+		return;
+
+	maxlvt = lapic_get_maxlvt();
 	/*
 	 * Masking an LVT entry can trigger a local APIC error
 	 * if the vector is zero. Mask LVTERR first to prevent this.
@@ -850,8 +857,6 @@ static int __init detect_init_APIC(void)
  */
 void __init init_apic_mappings(void)
 {
-	unsigned long apic_phys;
-
 	/*
 	 * If no local APIC can be found then set up a fake all
 	 * zeroes page to simulate the local APIC and another

* Re: [PATCH] [3/7] Use shorter addresses in i386 segfault printks
  2008-01-16 22:27 ` [PATCH] [3/7] Use shorter addresses in i386 segfault printks Andi Kleen
@ 2008-01-17  2:58   ` Harvey Harrison
  2008-01-17  3:11     ` H. Peter Anvin
  2008-01-17 11:27     ` Andi Kleen
  0 siblings, 2 replies; 14+ messages in thread
From: Harvey Harrison @ 2008-01-17  2:58 UTC (permalink / raw)
  To: Andi Kleen; +Cc: mingo, tglx, linux-kernel, H. Peter Anvin

On Wed, 2008-01-16 at 23:27 +0100, Andi Kleen wrote:
> Signed-off-by: Andi Kleen <ak@suse.de>
> 
> ---
>  arch/x86/mm/fault_32.c |    2 +-

Could use exactly the same in fault_64.c

>  #ifdef CONFIG_X86_32
> -			"%s%s[%d]: segfault at %08lx ip %08lx sp %08lx error %lx\n",
> +			"%s%s[%d]: segfault at %lx ip %08lx sp %08lx error %lx\n",
>  #else
>  			"%s%s[%d]: segfault at %lx ip %lx sp %lx error %lx\n",
>  #endif

With the ongoing unification work, it would be nice if we could come
up with a way to unify printks like this.  Anyone have any bright ideas
on a format that will keep the current alignment on 32 and 64 bit with
the same syntax, or will these tiny ifdefs keep sprouting?

Harvey


* Re: [PATCH] [3/7] Use shorter addresses in i386 segfault printks
  2008-01-17  2:58   ` Harvey Harrison
@ 2008-01-17  3:11     ` H. Peter Anvin
  2008-01-17  3:22       ` Harvey Harrison
  2008-01-17 11:27     ` Andi Kleen
  1 sibling, 1 reply; 14+ messages in thread
From: H. Peter Anvin @ 2008-01-17  3:11 UTC (permalink / raw)
  To: Harvey Harrison; +Cc: Andi Kleen, mingo, tglx, linux-kernel

Harvey Harrison wrote:
> On Wed, 2008-01-16 at 23:27 +0100, Andi Kleen wrote:
>> Signed-off-by: Andi Kleen <ak@suse.de>
>>
>> ---
>>  arch/x86/mm/fault_32.c |    2 +-
> 
> Could use exactly the same in fault_64.c
> 
>>  #ifdef CONFIG_X86_32
>> -			"%s%s[%d]: segfault at %08lx ip %08lx sp %08lx error %lx\n",
>> +			"%s%s[%d]: segfault at %lx ip %08lx sp %08lx error %lx\n",
>>  #else
>>  			"%s%s[%d]: segfault at %lx ip %lx sp %lx error %lx\n",
>>  #endif
> 
> With the ongoing unification work, it would be nice if we could come
> up with a way to unify printks like this.  Anyone have any bright ideas
> on a format that will keep the current alignment on 32 and 64 bit with
> the same syntax, or will these tiny ifdefs keep sprouting?
> 

Casting to (void *) and using %p is probably your best bet.  That's what 
it really is anyway.

Note: in the kernel right now, %p doesn't have the leading 0x prefix, 
which it probably should...

	-hpa

* Re: [PATCH] [3/7] Use shorter addresses in i386 segfault printks
  2008-01-17  3:22       ` Harvey Harrison
@ 2008-01-17  3:21         ` H. Peter Anvin
  0 siblings, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2008-01-17  3:21 UTC (permalink / raw)
  To: Harvey Harrison; +Cc: Andi Kleen, mingo, tglx, linux-kernel

Harvey Harrison wrote:
>>>
>> Casting to (void *) and using %p is probably your best bet.  That's what 
>> it really is anyway.
>>
>> Note: in the kernel right now, %p doesn't have the leading 0x prefix, 
>> which it probably should...
> 
> Well, that won't exactly be the nicest-looking solution in places.
> Maybe a shorthand could be developed for this, or could another format
> specifier be added that implicitly does the (void *) cast? (%P, perhaps)
> 

Not without losing the ability of gcc to type-check printk arguments.

	-hpa

* Re: [PATCH] [3/7] Use shorter addresses in i386 segfault printks
  2008-01-17  3:11     ` H. Peter Anvin
@ 2008-01-17  3:22       ` Harvey Harrison
  2008-01-17  3:21         ` H. Peter Anvin
  0 siblings, 1 reply; 14+ messages in thread
From: Harvey Harrison @ 2008-01-17  3:22 UTC (permalink / raw)
  To: H. Peter Anvin; +Cc: Andi Kleen, mingo, tglx, linux-kernel

On Wed, 2008-01-16 at 22:11 -0500, H. Peter Anvin wrote:
> Harvey Harrison wrote:
> > On Wed, 2008-01-16 at 23:27 +0100, Andi Kleen wrote:
> >> Signed-off-by: Andi Kleen <ak@suse.de>
> >>
> >> ---
> >>  arch/x86/mm/fault_32.c |    2 +-
> > 
> > Could use exactly the same in fault_64.c
> > 
> >>  #ifdef CONFIG_X86_32
> >> -			"%s%s[%d]: segfault at %08lx ip %08lx sp %08lx error %lx\n",
> >> +			"%s%s[%d]: segfault at %lx ip %08lx sp %08lx error %lx\n",
> >>  #else
> >>  			"%s%s[%d]: segfault at %lx ip %lx sp %lx error %lx\n",
> >>  #endif
> > 
> > With the ongoing unification work, it would be nice if we could come
> > up with a way to unify printks like this.  Anyone have any bright ideas
> > on a format that will keep the current alignment on 32 and 64 bit with
> > the same syntax, or will these tiny ifdefs keep sprouting?
> > 
> 
> Casting to (void *) and using %p is probably your best bet.  That's what 
> it really is anyway.
> 
> Note: in the kernel right now, %p doesn't have the leading 0x prefix, 
> which it probably should...

Well, that won't exactly be the nicest-looking solution in places.
Maybe a shorthand could be developed for this, or could another format
specifier be added that implicitly does the (void *) cast? (%P, perhaps)

Harvey


* Re: [PATCH] [3/7] Use shorter addresses in i386 segfault printks
  2008-01-17  2:58   ` Harvey Harrison
  2008-01-17  3:11     ` H. Peter Anvin
@ 2008-01-17 11:27     ` Andi Kleen
  1 sibling, 0 replies; 14+ messages in thread
From: Andi Kleen @ 2008-01-17 11:27 UTC (permalink / raw)
  To: Harvey Harrison; +Cc: mingo, tglx, linux-kernel, H. Peter Anvin

On Thursday 17 January 2008 03:58:58 Harvey Harrison wrote:
> On Wed, 2008-01-16 at 23:27 +0100, Andi Kleen wrote:
> > Signed-off-by: Andi Kleen <ak@suse.de>
> >
> > ---
> >  arch/x86/mm/fault_32.c |    2 +-
>
> Could use exactly the same in fault_64.c

Hmm, I had this somewhere but it might have gotten lost during 
some merge step again.

-Andi

* Re: [PATCH] [0/7] Some random x86 patches that should all go into git-x86
  2008-01-16 22:27 [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Andi Kleen
                   ` (6 preceding siblings ...)
  2008-01-16 22:27 ` [PATCH] [7/7] Don't disable the APIC if it hasn't been mapped yet Andi Kleen
@ 2008-01-18  9:43 ` Ingo Molnar
  7 siblings, 0 replies; 14+ messages in thread
From: Ingo Molnar @ 2008-01-18  9:43 UTC (permalink / raw)
  To: Andi Kleen; +Cc: tglx, linux-kernel


* Andi Kleen <ak@suse.de> wrote:

> Some are reposts, some are not. See patch descriptions for details. I 
> believe I addressed all feedback that made sense in the reposted 
> patches.

thanks Andi, i've picked them up.

	Ingo

Thread overview: 14+ messages
2008-01-16 22:27 [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Andi Kleen
2008-01-16 22:27 ` [PATCH] [1/7] i386: Move MWAIT idle check to generic CPU initialization Andi Kleen
2008-01-16 22:27 ` [PATCH] [2/7] Use the correct cpuid method to detect MWAIT support for C states Andi Kleen
2008-01-16 22:27 ` [PATCH] [3/7] Use shorter addresses in i386 segfault printks Andi Kleen
2008-01-17  2:58   ` Harvey Harrison
2008-01-17  3:11     ` H. Peter Anvin
2008-01-17  3:22       ` Harvey Harrison
2008-01-17  3:21         ` H. Peter Anvin
2008-01-17 11:27     ` Andi Kleen
2008-01-16 22:27 ` [PATCH] [4/7] Print which shared library/executable faulted in segfault etc. messages Andi Kleen
2008-01-16 22:27 ` [PATCH] [5/7] Replace hard coded reservations in x86-64 early boot code with dynamic table v2 Andi Kleen
2008-01-16 22:27 ` [PATCH] [6/7] Optimize lock prefix switching to run less frequently v2 Andi Kleen
2008-01-16 22:27 ` [PATCH] [7/7] Don't disable the APIC if it hasn't been mapped yet Andi Kleen
2008-01-18  9:43 ` [PATCH] [0/7] Some random x86 patches that should all go into git-x86 Ingo Molnar
