linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1
@ 2011-08-03 13:31 Andy Lutomirski
  2011-08-03 13:31 ` [PATCH v2 1/6] x86-64: Pad vDSO to a page boundary Andy Lutomirski
                   ` (7 more replies)
  0 siblings, 8 replies; 20+ messages in thread
From: Andy Lutomirski @ 2011-08-03 13:31 UTC (permalink / raw)
  To: x86, Konrad Rzeszutek Wilk
  Cc: Linux Kernel Mailing List, jeremy, keir.xen, xen-devel,
	virtualization, Andy Lutomirski

This fixes various problems that cropped up with the vdso patches.

 - Patch 1 fixes an information leak to userspace.
 - Patches 2 and 3 fix the kernel build on gold.
 - Patches 4 and 5 fix Xen (I hope).
 - Patch 6 (optional) adds a trace event to vsyscall emulation.  It will
   make it easier to handle performance regression reports :)

[1] https://gitorious.org/linux-test-utils/linux-clock-tests

Changes from v1:
 - Improve changelog message for "x86-64/xen: Enable the vvar mapping"
 - Fix 32-bit build.
 - Add patch 6.

Andy Lutomirski (6):
  x86-64: Pad vDSO to a page boundary
  x86-64: Move the "user" vsyscall segment out of the data segment.
  x86-64: Work around gold bug 13023
  x86-64/xen: Enable the vvar mapping
  x86-64: Add user_64bit_mode paravirt op
  x86-64: Add vsyscall:emulate_vsyscall trace event

 arch/x86/include/asm/desc.h           |    4 +-
 arch/x86/include/asm/paravirt_types.h |    6 ++++
 arch/x86/include/asm/ptrace.h         |   19 +++++++++++++
 arch/x86/kernel/paravirt.c            |    4 +++
 arch/x86/kernel/step.c                |    2 +-
 arch/x86/kernel/vmlinux.lds.S         |   46 ++++++++++++++++++---------------
 arch/x86/kernel/vsyscall_64.c         |   12 +++++---
 arch/x86/kernel/vsyscall_trace.h      |   29 ++++++++++++++++++++
 arch/x86/mm/fault.c                   |    2 +-
 arch/x86/vdso/vdso.S                  |    1 +
 arch/x86/xen/enlighten.c              |    4 +++
 arch/x86/xen/mmu.c                    |    4 ++-
 12 files changed, 102 insertions(+), 31 deletions(-)
 create mode 100644 arch/x86/kernel/vsyscall_trace.h

-- 
1.7.6


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH v2 1/6] x86-64: Pad vDSO to a page boundary
  2011-08-03 13:31 [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Andy Lutomirski
@ 2011-08-03 13:31 ` Andy Lutomirski
  2011-08-05  5:37   ` [tip:x86/vdso] " tip-bot for Andy Lutomirski
  2011-08-03 13:31 ` [PATCH v2 2/6] x86-64: Move the "user" vsyscall segment out of the data segment Andy Lutomirski
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: Andy Lutomirski @ 2011-08-03 13:31 UTC (permalink / raw)
  To: x86, Konrad Rzeszutek Wilk
  Cc: Linux Kernel Mailing List, jeremy, keir.xen, xen-devel,
	virtualization, Andy Lutomirski

This avoids an information leak to userspace.
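
The one-line rationale deserves a word of expansion: the kernel maps the vDSO image into userspace whole pages at a time, so any bytes between vdso_end and the next page boundary become user-visible. Aligning vdso_end to PAGE_SIZE makes that span defined padding rather than whatever data the linker happened to place next. A minimal sketch (not from the patch) of the arithmetic involved:

```python
PAGE_SIZE = 4096

def bytes_leaked_without_padding(vdso_size):
    """Bytes of adjacent kernel data that would be mapped into
    userspace if the image were not padded to a page boundary."""
    return (-vdso_size) % PAGE_SIZE

# A 5000-byte image occupies two 4096-byte pages; without padding,
# the tail of the second page would expose 3192 neighboring bytes.
print(bytes_leaked_without_padding(5000))  # 3192
print(bytes_leaked_without_padding(8192))  # 0
```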

Signed-off-by: Andy Lutomirski <luto@mit.edu>
---
 arch/x86/vdso/vdso.S |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/vdso/vdso.S b/arch/x86/vdso/vdso.S
index 1b979c1..01f5e3b 100644
--- a/arch/x86/vdso/vdso.S
+++ b/arch/x86/vdso/vdso.S
@@ -9,6 +9,7 @@ __PAGE_ALIGNED_DATA
 vdso_start:
 	.incbin "arch/x86/vdso/vdso.so"
 vdso_end:
+	.align PAGE_SIZE /* extra data here leaks to userspace. */
 
 .previous
 
-- 
1.7.6



* [PATCH v2 2/6] x86-64: Move the "user" vsyscall segment out of the data segment.
  2011-08-03 13:31 [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Andy Lutomirski
  2011-08-03 13:31 ` [PATCH v2 1/6] x86-64: Pad vDSO to a page boundary Andy Lutomirski
@ 2011-08-03 13:31 ` Andy Lutomirski
  2011-08-05  5:37   ` [tip:x86/vdso] " tip-bot for Andy Lutomirski
  2011-08-03 13:31 ` [PATCH v2 3/6] x86-64: Work around gold bug 13023 Andy Lutomirski
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: Andy Lutomirski @ 2011-08-03 13:31 UTC (permalink / raw)
  To: x86, Konrad Rzeszutek Wilk
  Cc: Linux Kernel Mailing List, jeremy, keir.xen, xen-devel,
	virtualization, Andy Lutomirski

The kernel's loader doesn't seem to care, but gold complains.

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Reported-by: Arkadiusz Miskiewicz <a.miskiewicz@gmail.com>
---
 arch/x86/kernel/vmlinux.lds.S |   36 ++++++++++++++++++------------------
 1 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 4aa9c54..e79fb39 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -154,6 +154,24 @@ SECTIONS
 
 #ifdef CONFIG_X86_64
 
+	. = ALIGN(PAGE_SIZE);
+	__vvar_page = .;
+
+	.vvar : AT(ADDR(.vvar) - LOAD_OFFSET) {
+
+	      /* Place all vvars at the offsets in asm/vvar.h. */
+#define EMIT_VVAR(name, offset) 		\
+		. = offset;		\
+		*(.vvar_ ## name)
+#define __VVAR_KERNEL_LDS
+#include <asm/vvar.h>
+#undef __VVAR_KERNEL_LDS
+#undef EMIT_VVAR
+
+	} :data
+
+       . = ALIGN(__vvar_page + PAGE_SIZE, PAGE_SIZE);
+
 #define VSYSCALL_ADDR (-10*1024*1024)
 
 #define VLOAD_OFFSET (VSYSCALL_ADDR - __vsyscall_0 + LOAD_OFFSET)
@@ -162,7 +180,6 @@ SECTIONS
 #define VVIRT_OFFSET (VSYSCALL_ADDR - __vsyscall_0)
 #define VVIRT(x) (ADDR(x) - VVIRT_OFFSET)
 
-	. = ALIGN(4096);
 	__vsyscall_0 = .;
 
 	. = VSYSCALL_ADDR;
@@ -185,23 +202,6 @@ SECTIONS
 #undef VVIRT_OFFSET
 #undef VVIRT
 
-	__vvar_page = .;
-
-	.vvar : AT(ADDR(.vvar) - LOAD_OFFSET) {
-
-	      /* Place all vvars at the offsets in asm/vvar.h. */
-#define EMIT_VVAR(name, offset) 		\
-		. = offset;		\
-		*(.vvar_ ## name)
-#define __VVAR_KERNEL_LDS
-#include <asm/vvar.h>
-#undef __VVAR_KERNEL_LDS
-#undef EMIT_VVAR
-
-	} :data
-
-       . = ALIGN(__vvar_page + PAGE_SIZE, PAGE_SIZE);
-
 #endif /* CONFIG_X86_64 */
 
 	/* Init code and data - will be freed after init */
-- 
1.7.6



* [PATCH v2 3/6] x86-64: Work around gold bug 13023
  2011-08-03 13:31 [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Andy Lutomirski
  2011-08-03 13:31 ` [PATCH v2 1/6] x86-64: Pad vDSO to a page boundary Andy Lutomirski
  2011-08-03 13:31 ` [PATCH v2 2/6] x86-64: Move the "user" vsyscall segment out of the data segment Andy Lutomirski
@ 2011-08-03 13:31 ` Andy Lutomirski
  2011-08-05  5:38   ` [tip:x86/vdso] " tip-bot for Andy Lutomirski
  2011-08-03 13:31 ` [PATCH v2 4/6] x86-64/xen: Enable the vvar mapping Andy Lutomirski
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: Andy Lutomirski @ 2011-08-03 13:31 UTC (permalink / raw)
  To: x86, Konrad Rzeszutek Wilk
  Cc: Linux Kernel Mailing List, jeremy, keir.xen, xen-devel,
	virtualization, Andy Lutomirski

Gold has trouble assigning numbers to the location counter inside of
an output section description.  The bug was triggered by
9fd67b4ed0714ab718f1f9bd14c344af336a6df7, which consolidated all of
the vsyscall sections into a single section.  The workaround is IMO
still nicer than the old way of doing it.
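
The failing pattern and the workaround can be sketched as a minimal linker-script fragment (illustrative only; names like __start_hack are invented here, mirroring the patch's __vsyscall_beginning_hack): gold mishandles a bare numeric assignment to the location counter inside an output section description, while an assignment relative to a symbol defined at the start of the section is resolved correctly by both linkers.

```
SECTIONS {
	.vsyscall : {
		__start_hack = .;	/* symbol pinned to section start */
		*(.vsyscall_0)
		/* A bare ". = 1024;" here trips gold bug 13023; the
		 * symbol-relative form below works in ld and gold. */
		. = __start_hack + 1024;
		*(.vsyscall_1)
	}
}
```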

This produces an apparently valid kernel image and passes my vdso
tests on both GNU ld version 2.21.51.0.6-2.fc15 20110118 and GNU
gold (version 2.21.51.0.6-2.fc15 20110118) 1.10 as distributed by
Fedora 15.

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Reported-by: Arkadiusz Miskiewicz <a.miskiewicz@gmail.com>
---
 arch/x86/kernel/vmlinux.lds.S |   16 ++++++++++------
 1 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index e79fb39..8f3a265 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -158,10 +158,12 @@ SECTIONS
 	__vvar_page = .;
 
 	.vvar : AT(ADDR(.vvar) - LOAD_OFFSET) {
+		/* work around gold bug 13023 */
+		__vvar_beginning_hack = .;
 
-	      /* Place all vvars at the offsets in asm/vvar.h. */
-#define EMIT_VVAR(name, offset) 		\
-		. = offset;		\
+		/* Place all vvars at the offsets in asm/vvar.h. */
+#define EMIT_VVAR(name, offset) 			\
+		. = __vvar_beginning_hack + offset;	\
 		*(.vvar_ ## name)
 #define __VVAR_KERNEL_LDS
 #include <asm/vvar.h>
@@ -184,15 +186,17 @@ SECTIONS
 
 	. = VSYSCALL_ADDR;
 	.vsyscall : AT(VLOAD(.vsyscall)) {
+		/* work around gold bug 13023 */
+		__vsyscall_beginning_hack = .;
 		*(.vsyscall_0)
 
-		. = 1024;
+		. = __vsyscall_beginning_hack + 1024;
 		*(.vsyscall_1)
 
-		. = 2048;
+		. = __vsyscall_beginning_hack + 2048;
 		*(.vsyscall_2)
 
-		. = 4096;  /* Pad the whole page. */
+		. = __vsyscall_beginning_hack + 4096;  /* Pad the whole page. */
 	} :user =0xcc
 	. = ALIGN(__vsyscall_0 + PAGE_SIZE, PAGE_SIZE);
 
-- 
1.7.6



* [PATCH v2 4/6] x86-64/xen: Enable the vvar mapping
  2011-08-03 13:31 [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Andy Lutomirski
                   ` (2 preceding siblings ...)
  2011-08-03 13:31 ` [PATCH v2 3/6] x86-64: Work around gold bug 13023 Andy Lutomirski
@ 2011-08-03 13:31 ` Andy Lutomirski
  2011-08-03 13:49   ` Konrad Rzeszutek Wilk
  2011-08-05  5:38   ` [tip:x86/vdso] x86-64, xen: " tip-bot for Andy Lutomirski
  2011-08-03 13:31 ` [PATCH v2 5/6] x86-64: Add user_64bit_mode paravirt op Andy Lutomirski
                   ` (3 subsequent siblings)
  7 siblings, 2 replies; 20+ messages in thread
From: Andy Lutomirski @ 2011-08-03 13:31 UTC (permalink / raw)
  To: x86, Konrad Rzeszutek Wilk
  Cc: Linux Kernel Mailing List, jeremy, keir.xen, xen-devel,
	virtualization, Andy Lutomirski

Xen needs to handle VVAR_PAGE, introduced in git commit:
9fd67b4ed0714ab718f1f9bd14c344af336a6df7
x86-64: Give vvars their own page

Otherwise we die during bootup with a message like:

(XEN) mm.c:940:d10 Error getting mfn 1888 (pfn 1e3e48) from L1 entry
      8000000001888465 for l1e_owner=10, pg_owner=10
(XEN) mm.c:5049:d10 ptwr_emulate: could not get_page_from_l1e()
[    0.000000] BUG: unable to handle kernel NULL pointer dereference at (null)
[    0.000000] IP: [<ffffffff8103a930>] xen_set_pte+0x20/0xe0

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index f987bde..8cce339 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1916,6 +1916,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 # endif
 #else
 	case VSYSCALL_LAST_PAGE ... VSYSCALL_FIRST_PAGE:
+	case VVAR_PAGE:
 #endif
 	case FIX_TEXT_POKE0:
 	case FIX_TEXT_POKE1:
@@ -1956,7 +1957,8 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 #ifdef CONFIG_X86_64
 	/* Replicate changes to map the vsyscall page into the user
 	   pagetable vsyscall mapping. */
-	if (idx >= VSYSCALL_LAST_PAGE && idx <= VSYSCALL_FIRST_PAGE) {
+	if ((idx >= VSYSCALL_LAST_PAGE && idx <= VSYSCALL_FIRST_PAGE) ||
+	    idx == VVAR_PAGE) {
 		unsigned long vaddr = __fix_to_virt(idx);
 		set_pte_vaddr_pud(level3_user_vsyscall, vaddr, pte);
 	}
-- 
1.7.6



* [PATCH v2 5/6] x86-64: Add user_64bit_mode paravirt op
  2011-08-03 13:31 [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Andy Lutomirski
                   ` (3 preceding siblings ...)
  2011-08-03 13:31 ` [PATCH v2 4/6] x86-64/xen: Enable the vvar mapping Andy Lutomirski
@ 2011-08-03 13:31 ` Andy Lutomirski
  2011-08-05  5:39   ` [tip:x86/vdso] " tip-bot for Andy Lutomirski
  2011-08-03 13:31 ` [PATCH v2 6/6] x86-64: Add vsyscall:emulate_vsyscall trace event Andy Lutomirski
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: Andy Lutomirski @ 2011-08-03 13:31 UTC (permalink / raw)
  To: x86, Konrad Rzeszutek Wilk
  Cc: Linux Kernel Mailing List, jeremy, keir.xen, xen-devel,
	virtualization, Andy Lutomirski

Three places in the kernel assume that the only long mode CPL 3
selector is __USER_CS.  This is not true on Xen -- Xen's sysretq
changes cs to the magic value 0xe033.
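
An x86 segment selector packs a requested privilege level, a table indicator, and a descriptor-table index into 16 bits. Decoding the two values in question shows why a bare `regs->cs == __USER_CS` test fails under Xen even though both selectors are ring-3 GDT code segments (a sketch using the architectural field layout; 0x33 is __USER_CS on 64-bit Linux):

```python
def decode_selector(sel):
    """Split an x86 segment selector into (index, table, rpl)."""
    rpl = sel & 0x3          # requested privilege level (bits 0-1)
    ti = (sel >> 2) & 0x1    # table indicator: 0 = GDT, 1 = LDT (bit 2)
    index = sel >> 3         # descriptor table index (bits 3-15)
    return index, ti, rpl

print(decode_selector(0x33))    # Linux __USER_CS:  (6, 0, 3)
print(decode_selector(0xe033))  # Xen's sysret CS:  (7174, 0, 3)
```

Same privilege level and table, different index, so any check for 64-bit user mode must accept both selectors.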

Two of the places are corner cases, but as of "x86-64: Improve
vsyscall emulation CS and RIP handling"
(c9712944b2a12373cb6ff8059afcfb7e826a6c54), vsyscalls will segfault
if called with Xen's extra CS selector.  This causes a panic when
older init builds die.

It seems impossible to make Xen use __USER_CS reliably without
taking a performance hit on every system call, so this fixes the
checks instead with a new paravirt op.  It's a little ugly because
ptrace.h can't include paravirt.h.

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/include/asm/desc.h           |    4 ++--
 arch/x86/include/asm/paravirt_types.h |    6 ++++++
 arch/x86/include/asm/ptrace.h         |   19 +++++++++++++++++++
 arch/x86/kernel/paravirt.c            |    4 ++++
 arch/x86/kernel/step.c                |    2 +-
 arch/x86/kernel/vsyscall_64.c         |    6 +-----
 arch/x86/mm/fault.c                   |    2 +-
 arch/x86/xen/enlighten.c              |    4 ++++
 8 files changed, 38 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h
index 7b439d9..41935fa 100644
--- a/arch/x86/include/asm/desc.h
+++ b/arch/x86/include/asm/desc.h
@@ -27,8 +27,8 @@ static inline void fill_ldt(struct desc_struct *desc, const struct user_desc *in
 
 	desc->base2		= (info->base_addr & 0xff000000) >> 24;
 	/*
-	 * Don't allow setting of the lm bit. It is useless anyway
-	 * because 64bit system calls require __USER_CS:
+	 * Don't allow setting of the lm bit. It would confuse
+	 * user_64bit_mode and would get overridden by sysret anyway.
 	 */
 	desc->l			= 0;
 }
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 2c76521..8e8b9a4 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -41,6 +41,7 @@
 
 #include <asm/desc_defs.h>
 #include <asm/kmap_types.h>
+#include <asm/pgtable_types.h>
 
 struct page;
 struct thread_struct;
@@ -63,6 +64,11 @@ struct paravirt_callee_save {
 struct pv_info {
 	unsigned int kernel_rpl;
 	int shared_kernel_pmd;
+
+#ifdef CONFIG_X86_64
+	u16 extra_user_64bit_cs;  /* __USER_CS if none */
+#endif
+
 	int paravirt_enabled;
 	const char *name;
 };
diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
index 94e7618..3566454 100644
--- a/arch/x86/include/asm/ptrace.h
+++ b/arch/x86/include/asm/ptrace.h
@@ -131,6 +131,9 @@ struct pt_regs {
 #ifdef __KERNEL__
 
 #include <linux/init.h>
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt_types.h>
+#endif
 
 struct cpuinfo_x86;
 struct task_struct;
@@ -187,6 +190,22 @@ static inline int v8086_mode(struct pt_regs *regs)
 #endif
 }
 
+#ifdef CONFIG_X86_64
+static inline bool user_64bit_mode(struct pt_regs *regs)
+{
+#ifndef CONFIG_PARAVIRT
+	/*
+	 * On non-paravirt systems, this is the only long mode CPL 3
+	 * selector.  We do not allow long mode selectors in the LDT.
+	 */
+	return regs->cs == __USER_CS;
+#else
+	/* Headers are too twisted for this to go in paravirt.h. */
+	return regs->cs == __USER_CS || regs->cs == pv_info.extra_user_64bit_cs;
+#endif
+}
+#endif
+
 /*
  * X86_32 CPUs don't save ss and esp if the CPU is already in kernel mode
  * when it traps.  The previous stack will be directly underneath the saved
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 613a793..d90272e 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -307,6 +307,10 @@ struct pv_info pv_info = {
 	.paravirt_enabled = 0,
 	.kernel_rpl = 0,
 	.shared_kernel_pmd = 1,	/* Only used when CONFIG_X86_PAE is set */
+
+#ifdef CONFIG_X86_64
+	.extra_user_64bit_cs = __USER_CS,
+#endif
 };
 
 struct pv_init_ops pv_init_ops = {
diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c
index 7977f0c..c346d11 100644
--- a/arch/x86/kernel/step.c
+++ b/arch/x86/kernel/step.c
@@ -74,7 +74,7 @@ static int is_setting_trap_flag(struct task_struct *child, struct pt_regs *regs)
 
 #ifdef CONFIG_X86_64
 		case 0x40 ... 0x4f:
-			if (regs->cs != __USER_CS)
+			if (!user_64bit_mode(regs))
 				/* 32-bit mode: register increment */
 				return 0;
 			/* 64-bit mode: REX prefix */
diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
index dda7dff..1725930 100644
--- a/arch/x86/kernel/vsyscall_64.c
+++ b/arch/x86/kernel/vsyscall_64.c
@@ -127,11 +127,7 @@ void dotraplinkage do_emulate_vsyscall(struct pt_regs *regs, long error_code)
 
 	local_irq_enable();
 
-	/*
-	 * Real 64-bit user mode code has cs == __USER_CS.  Anything else
-	 * is bogus.
-	 */
-	if (regs->cs != __USER_CS) {
+	if (!user_64bit_mode(regs)) {
 		/*
 		 * If we trapped from kernel mode, we might as well OOPS now
 		 * instead of returning to some random address and OOPSing
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 4d09df0..decd51a 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -105,7 +105,7 @@ check_prefetch_opcode(struct pt_regs *regs, unsigned char *instr,
 		 * but for now it's good enough to assume that long
 		 * mode only uses well known segments or kernel.
 		 */
-		return (!user_mode(regs)) || (regs->cs == __USER_CS);
+		return (!user_mode(regs) || user_64bit_mode(regs));
 #endif
 	case 0x60:
 		/* 0x64 thru 0x67 are valid prefixes in all modes. */
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 974a528..e2345af 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -951,6 +951,10 @@ static const struct pv_info xen_info __initconst = {
 	.paravirt_enabled = 1,
 	.shared_kernel_pmd = 0,
 
+#ifdef CONFIG_X86_64
+	.extra_user_64bit_cs = FLAT_USER_CS64,
+#endif
+
 	.name = "Xen",
 };
 
-- 
1.7.6



* [PATCH v2 6/6] x86-64: Add vsyscall:emulate_vsyscall trace event
  2011-08-03 13:31 [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Andy Lutomirski
                   ` (4 preceding siblings ...)
  2011-08-03 13:31 ` [PATCH v2 5/6] x86-64: Add user_64bit_mode paravirt op Andy Lutomirski
@ 2011-08-03 13:31 ` Andy Lutomirski
  2011-08-05  5:39   ` [tip:x86/vdso] " tip-bot for Andy Lutomirski
  2011-08-03 13:53 ` [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Konrad Rzeszutek Wilk
  2011-08-03 17:34 ` [Xen-devel] " Sander Eikelenboom
  7 siblings, 1 reply; 20+ messages in thread
From: Andy Lutomirski @ 2011-08-03 13:31 UTC (permalink / raw)
  To: x86, Konrad Rzeszutek Wilk
  Cc: Linux Kernel Mailing List, jeremy, keir.xen, xen-devel,
	virtualization, Andy Lutomirski

Vsyscall emulation is slow, so make it easy to track down.
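
Once the tracepoint is compiled in, it can be toggled at runtime through the standard ftrace interface (a usage sketch; the tracing filesystem mount point may differ by distribution, and root is required):

```shell
# Enable the event and watch emulated vsyscalls as they happen.
cd /sys/kernel/debug/tracing
echo 1 > events/vsyscall/emulate_vsyscall/enable
cat trace_pipe   # emits lines matching TP_printk, e.g. "nr = 1"
```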

Signed-off-by: Andy Lutomirski <luto@mit.edu>
---
 arch/x86/kernel/vsyscall_64.c    |    6 ++++++
 arch/x86/kernel/vsyscall_trace.h |   29 +++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/kernel/vsyscall_trace.h

diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
index 1725930..93a0d46 100644
--- a/arch/x86/kernel/vsyscall_64.c
+++ b/arch/x86/kernel/vsyscall_64.c
@@ -50,6 +50,9 @@
 #include <asm/vgtod.h>
 #include <asm/traps.h>
 
+#define CREATE_TRACE_POINTS
+#include "vsyscall_trace.h"
+
 DEFINE_VVAR(int, vgetcpu_mode);
 DEFINE_VVAR(struct vsyscall_gtod_data, vsyscall_gtod_data) =
 {
@@ -146,6 +149,9 @@ void dotraplinkage do_emulate_vsyscall(struct pt_regs *regs, long error_code)
 	 * and int 0xcc is two bytes long.
 	 */
 	vsyscall_nr = addr_to_vsyscall_nr(regs->ip - 2);
+
+	trace_emulate_vsyscall(vsyscall_nr);
+
 	if (vsyscall_nr < 0) {
 		warn_bad_vsyscall(KERN_WARNING, regs,
 				  "illegal int 0xcc (exploit attempt?)");
diff --git a/arch/x86/kernel/vsyscall_trace.h b/arch/x86/kernel/vsyscall_trace.h
new file mode 100644
index 0000000..a8b2ede
--- /dev/null
+++ b/arch/x86/kernel/vsyscall_trace.h
@@ -0,0 +1,29 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM vsyscall
+
+#if !defined(__VSYSCALL_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define __VSYSCALL_TRACE_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(emulate_vsyscall,
+
+	    TP_PROTO(int nr),
+
+	    TP_ARGS(nr),
+
+	    TP_STRUCT__entry(__field(int, nr)),
+
+	    TP_fast_assign(
+			   __entry->nr = nr;
+			   ),
+
+	    TP_printk("nr = %d", __entry->nr)
+);
+
+#endif
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH ../../arch/x86/kernel
+#define TRACE_INCLUDE_FILE vsyscall_trace
+#include <trace/define_trace.h>
-- 
1.7.6



* Re: [PATCH v2 4/6] x86-64/xen: Enable the vvar mapping
  2011-08-03 13:31 ` [PATCH v2 4/6] x86-64/xen: Enable the vvar mapping Andy Lutomirski
@ 2011-08-03 13:49   ` Konrad Rzeszutek Wilk
  2011-08-05  5:38   ` [tip:x86/vdso] x86-64, xen: " tip-bot for Andy Lutomirski
  1 sibling, 0 replies; 20+ messages in thread
From: Konrad Rzeszutek Wilk @ 2011-08-03 13:49 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: x86, Linux Kernel Mailing List, jeremy, keir.xen, xen-devel,
	virtualization

On Wed, Aug 03, 2011 at 09:31:52AM -0400, Andy Lutomirski wrote:
> Xen needs to handle VVAR_PAGE, introduced in git commit:
> 9fd67b4ed0714ab718f1f9bd14c344af336a6df7
> x86-64: Give vvars their own page
> 
> Otherwise we die during bootup with a message like:
> 
> (XEN) mm.c:940:d10 Error getting mfn 1888 (pfn 1e3e48) from L1 entry
>       8000000001888465 for l1e_owner=10, pg_owner=10
> (XEN) mm.c:5049:d10 ptwr_emulate: could not get_page_from_l1e()
> [    0.000000] BUG: unable to handle kernel NULL pointer dereference at (null)
> [    0.000000] IP: [<ffffffff8103a930>] xen_set_pte+0x20/0xe0
> 
> Signed-off-by: Andy Lutomirski <luto@mit.edu>
> Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/mmu.c |    4 +++-
>  1 files changed, 3 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index f987bde..8cce339 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1916,6 +1916,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
>  # endif
>  #else
>  	case VSYSCALL_LAST_PAGE ... VSYSCALL_FIRST_PAGE:
> +	case VVAR_PAGE:
>  #endif
>  	case FIX_TEXT_POKE0:
>  	case FIX_TEXT_POKE1:
> @@ -1956,7 +1957,8 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
>  #ifdef CONFIG_X86_64
>  	/* Replicate changes to map the vsyscall page into the user
>  	   pagetable vsyscall mapping. */
> -	if (idx >= VSYSCALL_LAST_PAGE && idx <= VSYSCALL_FIRST_PAGE) {
> +	if ((idx >= VSYSCALL_LAST_PAGE && idx <= VSYSCALL_FIRST_PAGE) ||
> +	    idx == VVAR_PAGE) {
>  		unsigned long vaddr = __fix_to_virt(idx);
>  		set_pte_vaddr_pud(level3_user_vsyscall, vaddr, pte);
>  	}
> -- 
> 1.7.6


* Re: [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1
  2011-08-03 13:31 [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Andy Lutomirski
                   ` (5 preceding siblings ...)
  2011-08-03 13:31 ` [PATCH v2 6/6] x86-64: Add vsyscall:emulate_vsyscall trace event Andy Lutomirski
@ 2011-08-03 13:53 ` Konrad Rzeszutek Wilk
  2011-08-03 13:56   ` Konrad Rzeszutek Wilk
  2011-08-03 17:34 ` [Xen-devel] " Sander Eikelenboom
  7 siblings, 1 reply; 20+ messages in thread
From: Konrad Rzeszutek Wilk @ 2011-08-03 13:53 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: x86, Linux Kernel Mailing List, jeremy, keir.xen, xen-devel,
	virtualization

On Wed, Aug 03, 2011 at 09:31:48AM -0400, Andy Lutomirski wrote:
> This fixes various problems that cropped up with the vdso patches.
> 
>  - Patch 1 fixes an information leak to userspace.
>  - Patches 2 and 3 fix the kernel build on gold.
>  - Patches 4 and 5 fix Xen (I hope).
>  - Patch 6 (optional) adds a trace event to vsyscall emulation.  It will
>    make it easier to handle performance regression reports :)

Hm, you seemed to have the x86 maintainers on your email..
> 
> [1] https://gitorious.org/linux-test-utils/linux-clock-tests
> 
> Changes from v1:
>  - Improve changelog message for "x86-64/xen: Enable the vvar mapping"
>  - Fix 32-bit build.
>  - Add patch 6.
> 
> Andy Lutomirski (6):
>   x86-64: Pad vDSO to a page boundary
>   x86-64: Move the "user" vsyscall segment out of the data segment.
>   x86-64: Work around gold bug 13023
>   x86-64/xen: Enable the vvar mapping
>   x86-64: Add user_64bit_mode paravirt op
>   x86-64: Add vsyscall:emulate_vsyscall trace event
> 
>  arch/x86/include/asm/desc.h           |    4 +-
>  arch/x86/include/asm/paravirt_types.h |    6 ++++
>  arch/x86/include/asm/ptrace.h         |   19 +++++++++++++
>  arch/x86/kernel/paravirt.c            |    4 +++
>  arch/x86/kernel/step.c                |    2 +-
>  arch/x86/kernel/vmlinux.lds.S         |   46 ++++++++++++++++++---------------
>  arch/x86/kernel/vsyscall_64.c         |   12 +++++---
>  arch/x86/kernel/vsyscall_trace.h      |   29 ++++++++++++++++++++
>  arch/x86/mm/fault.c                   |    2 +-
>  arch/x86/vdso/vdso.S                  |    1 +
>  arch/x86/xen/enlighten.c              |    4 +++
>  arch/x86/xen/mmu.c                    |    4 ++-
>  12 files changed, 102 insertions(+), 31 deletions(-)
>  create mode 100644 arch/x86/kernel/vsyscall_trace.h
> 
> -- 
> 1.7.6


* Re: [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1
  2011-08-03 13:53 ` [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Konrad Rzeszutek Wilk
@ 2011-08-03 13:56   ` Konrad Rzeszutek Wilk
  2011-08-03 13:59     ` Andrew Lutomirski
  0 siblings, 1 reply; 20+ messages in thread
From: Konrad Rzeszutek Wilk @ 2011-08-03 13:56 UTC (permalink / raw)
  To: Andy Lutomirski, mingo, hpa, x86
  Cc: x86, Linux Kernel Mailing List, jeremy, keir.xen, xen-devel,
	virtualization

On Wed, Aug 03, 2011 at 09:53:22AM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Aug 03, 2011 at 09:31:48AM -0400, Andy Lutomirski wrote:
> > This fixes various problems that cropped up with the vdso patches.
> > 
> >  - Patch 1 fixes an information leak to userspace.
> >  - Patches 2 and 3 fix the kernel build on gold.
> >  - Patches 4 and 5 fix Xen (I hope).
> >  - Patch 6 (optional) adds a trace event to vsyscall emulation.  It will
> >    make it easier to handle performance regression reports :)
> 
> Hm, you seemed to have the x86 maintainers on your email..

I definitely need some coffee. What I meant was that you missed putting
the x86 maintainers on this patchset. They are the ones that will handle this.

I put them on the To list for you.
> > 
> > [1] https://gitorious.org/linux-test-utils/linux-clock-tests
> > 
> > Changes from v1:
> >  - Improve changelog message for "x86-64/xen: Enable the vvar mapping"
> >  - Fix 32-bit build.
> >  - Add patch 6.
> > 
> > Andy Lutomirski (6):
> >   x86-64: Pad vDSO to a page boundary
> >   x86-64: Move the "user" vsyscall segment out of the data segment.
> >   x86-64: Work around gold bug 13023
> >   x86-64/xen: Enable the vvar mapping
> >   x86-64: Add user_64bit_mode paravirt op
> >   x86-64: Add vsyscall:emulate_vsyscall trace event
> > 
> >  arch/x86/include/asm/desc.h           |    4 +-
> >  arch/x86/include/asm/paravirt_types.h |    6 ++++
> >  arch/x86/include/asm/ptrace.h         |   19 +++++++++++++
> >  arch/x86/kernel/paravirt.c            |    4 +++
> >  arch/x86/kernel/step.c                |    2 +-
> >  arch/x86/kernel/vmlinux.lds.S         |   46 ++++++++++++++++++---------------
> >  arch/x86/kernel/vsyscall_64.c         |   12 +++++---
> >  arch/x86/kernel/vsyscall_trace.h      |   29 ++++++++++++++++++++
> >  arch/x86/mm/fault.c                   |    2 +-
> >  arch/x86/vdso/vdso.S                  |    1 +
> >  arch/x86/xen/enlighten.c              |    4 +++
> >  arch/x86/xen/mmu.c                    |    4 ++-
> >  12 files changed, 102 insertions(+), 31 deletions(-)
> >  create mode 100644 arch/x86/kernel/vsyscall_trace.h
> > 
> > -- 
> > 1.7.6


* Re: [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1
  2011-08-03 13:56   ` Konrad Rzeszutek Wilk
@ 2011-08-03 13:59     ` Andrew Lutomirski
  2011-08-11 20:28       ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 20+ messages in thread
From: Andrew Lutomirski @ 2011-08-03 13:59 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: mingo, hpa, x86, Linux Kernel Mailing List, jeremy, keir.xen,
	xen-devel, virtualization

On Wed, Aug 3, 2011 at 9:56 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Wed, Aug 03, 2011 at 09:53:22AM -0400, Konrad Rzeszutek Wilk wrote:
>> On Wed, Aug 03, 2011 at 09:31:48AM -0400, Andy Lutomirski wrote:
>> > This fixes various problems that cropped up with the vdso patches.
>> >
>> >  - Patch 1 fixes an information leak to userspace.
>> >  - Patches 2 and 3 fix the kernel build on gold.
>> >  - Patches 4 and 5 fix Xen (I hope).
>> >  - Patch 6 (optional) adds a trace event to vsyscall emulation.  It will
>> >    make it easier to handle performance regression reports :)
>>
>> Hm, you seemed to have the x86 maintainers on your email..
>
> I definitly need some coffee. What I meant was that you missing putting
> the x86 maintainers on this patchset. They are the ones that will handle this.
>
> I put them on the To list for you.

Are you sure about that coffee?  I'm pretty sure I had x86@kernel.org in there.

--Andy


* Re: [Xen-devel] [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1
  2011-08-03 13:31 [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Andy Lutomirski
                   ` (6 preceding siblings ...)
  2011-08-03 13:53 ` [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Konrad Rzeszutek Wilk
@ 2011-08-03 17:34 ` Sander Eikelenboom
  7 siblings, 0 replies; 20+ messages in thread
From: Sander Eikelenboom @ 2011-08-03 17:34 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: x86, Konrad Rzeszutek Wilk, jeremy, xen-devel,
	Linux Kernel Mailing List, virtualization, keir.xen

Hello Andy,

Wednesday, August 3, 2011, 3:31:48 PM, you wrote:

> This fixes various problems that cropped up with the vdso patches.

>  - Patch 1 fixes an information leak to userspace.
>  - Patches 2 and 3 fix the kernel build on gold.
>  - Patches 4 and 5 fix Xen (I hope).
>  - Patch 6 (optional) adds a trace event to vsyscall emulation.  It will
>    make it easier to handle performance regression reports :)

> [1] https://gitorious.org/linux-test-utils/linux-clock-tests

> Changes from v1:
>  - Improve changelog message for "x86-64/xen: Enable the vvar mapping"
>  - Fix 32-bit build.
>  - Add patch 6.

> Andy Lutomirski (6):
>   x86-64: Pad vDSO to a page boundary
>   x86-64: Move the "user" vsyscall segment out of the data segment.
>   x86-64: Work around gold bug 13023
>   x86-64/xen: Enable the vvar mapping
>   x86-64: Add user_64bit_mode paravirt op
>   x86-64: Add vsyscall:emulate_vsyscall trace event

>  arch/x86/include/asm/desc.h           |    4 +-
>  arch/x86/include/asm/paravirt_types.h |    6 ++++
>  arch/x86/include/asm/ptrace.h         |   19 +++++++++++++
>  arch/x86/kernel/paravirt.c            |    4 +++
>  arch/x86/kernel/step.c                |    2 +-
>  arch/x86/kernel/vmlinux.lds.S         |   46 ++++++++++++++++++---------------
>  arch/x86/kernel/vsyscall_64.c         |   12 +++++---
>  arch/x86/kernel/vsyscall_trace.h      |   29 ++++++++++++++++++++
>  arch/x86/mm/fault.c                   |    2 +-
>  arch/x86/vdso/vdso.S                  |    1 +
>  arch/x86/xen/enlighten.c              |    4 +++
>  arch/x86/xen/mmu.c                    |    4 ++-
>  12 files changed, 102 insertions(+), 31 deletions(-)
>  create mode 100644 arch/x86/kernel/vsyscall_trace.h

Compile- and boot-tested on Xen 4.1.1 and bare metal on x86_64; it fixes my boot panic under Xen.

Tested-by: Sander Eikelenboom <linux@eikelenboom.it>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [tip:x86/vdso] x86-64: Pad vDSO to a page boundary
  2011-08-03 13:31 ` [PATCH v2 1/6] x86-64: Pad vDSO to a page boundary Andy Lutomirski
@ 2011-08-05  5:37   ` tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Andy Lutomirski @ 2011-08-05  5:37 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, luto, tglx, hpa, luto

Commit-ID:  1bdfac19b3ecfca545281c15c7aea7ebc2eaef31
Gitweb:     http://git.kernel.org/tip/1bdfac19b3ecfca545281c15c7aea7ebc2eaef31
Author:     Andy Lutomirski <luto@MIT.EDU>
AuthorDate: Wed, 3 Aug 2011 09:31:49 -0400
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Thu, 4 Aug 2011 16:13:34 -0700

x86-64: Pad vDSO to a page boundary

This avoids an information leak to userspace.

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Link: http://lkml.kernel.org/r/a63380a3c58a0506a2f5a18ba1b12dbde1f25e58.1312378163.git.luto@mit.edu
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
---
 arch/x86/vdso/vdso.S |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/vdso/vdso.S b/arch/x86/vdso/vdso.S
index 1b979c1..01f5e3b 100644
--- a/arch/x86/vdso/vdso.S
+++ b/arch/x86/vdso/vdso.S
@@ -9,6 +9,7 @@ __PAGE_ALIGNED_DATA
 vdso_start:
 	.incbin "arch/x86/vdso/vdso.so"
 vdso_end:
+	.align PAGE_SIZE /* extra data here leaks to userspace. */
 
 .previous
 


* [tip:x86/vdso] x86-64: Move the "user" vsyscall segment out of the data segment.
  2011-08-03 13:31 ` [PATCH v2 2/6] x86-64: Move the "user" vsyscall segment out of the data segment Andy Lutomirski
@ 2011-08-05  5:37   ` tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Andy Lutomirski @ 2011-08-05  5:37 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, luto, a.miskiewicz, tglx, hpa, luto

Commit-ID:  9c40818da5b39fca236029059ab839857b1ef56c
Gitweb:     http://git.kernel.org/tip/9c40818da5b39fca236029059ab839857b1ef56c
Author:     Andy Lutomirski <luto@MIT.EDU>
AuthorDate: Wed, 3 Aug 2011 09:31:50 -0400
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Thu, 4 Aug 2011 16:13:35 -0700

x86-64: Move the "user" vsyscall segment out of the data segment.

The kernel's loader doesn't seem to care, but gold complains.

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Link: http://lkml.kernel.org/r/f0716870c297242a841b949953d80c0d87bf3d3f.1312378163.git.luto@mit.edu
Reported-by: Arkadiusz Miskiewicz <a.miskiewicz@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
---
 arch/x86/kernel/vmlinux.lds.S |   36 ++++++++++++++++++------------------
 1 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 4aa9c54..e79fb39 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -154,6 +154,24 @@ SECTIONS
 
 #ifdef CONFIG_X86_64
 
+	. = ALIGN(PAGE_SIZE);
+	__vvar_page = .;
+
+	.vvar : AT(ADDR(.vvar) - LOAD_OFFSET) {
+
+	      /* Place all vvars at the offsets in asm/vvar.h. */
+#define EMIT_VVAR(name, offset) 		\
+		. = offset;		\
+		*(.vvar_ ## name)
+#define __VVAR_KERNEL_LDS
+#include <asm/vvar.h>
+#undef __VVAR_KERNEL_LDS
+#undef EMIT_VVAR
+
+	} :data
+
+       . = ALIGN(__vvar_page + PAGE_SIZE, PAGE_SIZE);
+
 #define VSYSCALL_ADDR (-10*1024*1024)
 
 #define VLOAD_OFFSET (VSYSCALL_ADDR - __vsyscall_0 + LOAD_OFFSET)
@@ -162,7 +180,6 @@ SECTIONS
 #define VVIRT_OFFSET (VSYSCALL_ADDR - __vsyscall_0)
 #define VVIRT(x) (ADDR(x) - VVIRT_OFFSET)
 
-	. = ALIGN(4096);
 	__vsyscall_0 = .;
 
 	. = VSYSCALL_ADDR;
@@ -185,23 +202,6 @@ SECTIONS
 #undef VVIRT_OFFSET
 #undef VVIRT
 
-	__vvar_page = .;
-
-	.vvar : AT(ADDR(.vvar) - LOAD_OFFSET) {
-
-	      /* Place all vvars at the offsets in asm/vvar.h. */
-#define EMIT_VVAR(name, offset) 		\
-		. = offset;		\
-		*(.vvar_ ## name)
-#define __VVAR_KERNEL_LDS
-#include <asm/vvar.h>
-#undef __VVAR_KERNEL_LDS
-#undef EMIT_VVAR
-
-	} :data
-
-       . = ALIGN(__vvar_page + PAGE_SIZE, PAGE_SIZE);
-
 #endif /* CONFIG_X86_64 */
 
 	/* Init code and data - will be freed after init */


* [tip:x86/vdso] x86-64: Work around gold bug 13023
  2011-08-03 13:31 ` [PATCH v2 3/6] x86-64: Work around gold bug 13023 Andy Lutomirski
@ 2011-08-05  5:38   ` tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Andy Lutomirski @ 2011-08-05  5:38 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, luto, a.miskiewicz, tglx, hpa, luto

Commit-ID:  f670bb760e7d32ec9c690e748a1d5d04921363ab
Gitweb:     http://git.kernel.org/tip/f670bb760e7d32ec9c690e748a1d5d04921363ab
Author:     Andy Lutomirski <luto@MIT.EDU>
AuthorDate: Wed, 3 Aug 2011 09:31:51 -0400
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Thu, 4 Aug 2011 16:13:38 -0700

x86-64: Work around gold bug 13023

Gold has trouble assigning numbers to the location counter inside of
an output section description.  The bug was triggered by
9fd67b4ed0714ab718f1f9bd14c344af336a6df7, which consolidated all of
the vsyscall sections into a single section.  The workaround is IMO
still nicer than the old way of doing it.

This produces an apparently valid kernel image and passes my vdso
tests on both GNU ld version 2.21.51.0.6-2.fc15 20110118 and GNU
gold (version 2.21.51.0.6-2.fc15 20110118) 1.10 as distributed by
Fedora 15.

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Link: http://lkml.kernel.org/r/0b260cb806f1f9a25c00ce8377a5f035d57f557a.1312378163.git.luto@mit.edu
Reported-by: Arkadiusz Miskiewicz <a.miskiewicz@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
---
 arch/x86/kernel/vmlinux.lds.S |   16 ++++++++++------
 1 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index e79fb39..8f3a265 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -158,10 +158,12 @@ SECTIONS
 	__vvar_page = .;
 
 	.vvar : AT(ADDR(.vvar) - LOAD_OFFSET) {
+		/* work around gold bug 13023 */
+		__vvar_beginning_hack = .;
 
-	      /* Place all vvars at the offsets in asm/vvar.h. */
-#define EMIT_VVAR(name, offset) 		\
-		. = offset;		\
+		/* Place all vvars at the offsets in asm/vvar.h. */
+#define EMIT_VVAR(name, offset) 			\
+		. = __vvar_beginning_hack + offset;	\
 		*(.vvar_ ## name)
 #define __VVAR_KERNEL_LDS
 #include <asm/vvar.h>
@@ -184,15 +186,17 @@ SECTIONS
 
 	. = VSYSCALL_ADDR;
 	.vsyscall : AT(VLOAD(.vsyscall)) {
+		/* work around gold bug 13023 */
+		__vsyscall_beginning_hack = .;
 		*(.vsyscall_0)
 
-		. = 1024;
+		. = __vsyscall_beginning_hack + 1024;
 		*(.vsyscall_1)
 
-		. = 2048;
+		. = __vsyscall_beginning_hack + 2048;
 		*(.vsyscall_2)
 
-		. = 4096;  /* Pad the whole page. */
+		. = __vsyscall_beginning_hack + 4096;  /* Pad the whole page. */
 	} :user =0xcc
 	. = ALIGN(__vsyscall_0 + PAGE_SIZE, PAGE_SIZE);
 


* [tip:x86/vdso] x86-64, xen: Enable the vvar mapping
  2011-08-03 13:31 ` [PATCH v2 4/6] x86-64/xen: Enable the vvar mapping Andy Lutomirski
  2011-08-03 13:49   ` Konrad Rzeszutek Wilk
@ 2011-08-05  5:38   ` tip-bot for Andy Lutomirski
  1 sibling, 0 replies; 20+ messages in thread
From: tip-bot for Andy Lutomirski @ 2011-08-05  5:38 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, konrad.wilk, luto, tglx, hpa, luto

Commit-ID:  5d5791af4c0d4fd32093882357506355c3357503
Gitweb:     http://git.kernel.org/tip/5d5791af4c0d4fd32093882357506355c3357503
Author:     Andy Lutomirski <luto@MIT.EDU>
AuthorDate: Wed, 3 Aug 2011 09:31:52 -0400
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Thu, 4 Aug 2011 16:13:47 -0700

x86-64, xen: Enable the vvar mapping

Xen needs to handle VVAR_PAGE, introduced in git commit:
9fd67b4ed0714ab718f1f9bd14c344af336a6df7
x86-64: Give vvars their own page

Otherwise we die during bootup with a message like:

(XEN) mm.c:940:d10 Error getting mfn 1888 (pfn 1e3e48) from L1 entry
      8000000001888465 for l1e_owner=10, pg_owner=10
(XEN) mm.c:5049:d10 ptwr_emulate: could not get_page_from_l1e()
[    0.000000] BUG: unable to handle kernel NULL pointer dereference at (null)
[    0.000000] IP: [<ffffffff8103a930>] xen_set_pte+0x20/0xe0

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Link: http://lkml.kernel.org/r/4659478ed2f3480938f96491c2ecbe2b2e113a23.1312378163.git.luto@mit.edu
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
---
 arch/x86/xen/mmu.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 0ccccb6..2e78619 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1829,6 +1829,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 # endif
 #else
 	case VSYSCALL_LAST_PAGE ... VSYSCALL_FIRST_PAGE:
+	case VVAR_PAGE:
 #endif
 	case FIX_TEXT_POKE0:
 	case FIX_TEXT_POKE1:
@@ -1869,7 +1870,8 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 #ifdef CONFIG_X86_64
 	/* Replicate changes to map the vsyscall page into the user
 	   pagetable vsyscall mapping. */
-	if (idx >= VSYSCALL_LAST_PAGE && idx <= VSYSCALL_FIRST_PAGE) {
+	if ((idx >= VSYSCALL_LAST_PAGE && idx <= VSYSCALL_FIRST_PAGE) ||
+	    idx == VVAR_PAGE) {
 		unsigned long vaddr = __fix_to_virt(idx);
 		set_pte_vaddr_pud(level3_user_vsyscall, vaddr, pte);
 	}


* [tip:x86/vdso] x86-64: Add user_64bit_mode paravirt op
  2011-08-03 13:31 ` [PATCH v2 5/6] x86-64: Add user_64bit_mode paravirt op Andy Lutomirski
@ 2011-08-05  5:39   ` tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Andy Lutomirski @ 2011-08-05  5:39 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, konrad.wilk, luto, tglx, hpa, luto

Commit-ID:  318f5a2a672152328c9fb4dead504b89ec738a43
Gitweb:     http://git.kernel.org/tip/318f5a2a672152328c9fb4dead504b89ec738a43
Author:     Andy Lutomirski <luto@MIT.EDU>
AuthorDate: Wed, 3 Aug 2011 09:31:53 -0400
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Thu, 4 Aug 2011 16:13:49 -0700

x86-64: Add user_64bit_mode paravirt op

Three places in the kernel assume that the only long mode CPL 3
selector is __USER_CS.  This is not true on Xen -- Xen's sysretq
changes cs to the magic value 0xe033.

Two of the places are corner cases, but as of "x86-64: Improve
vsyscall emulation CS and RIP handling"
(c9712944b2a12373cb6ff8059afcfb7e826a6c54), vsyscalls will segfault
if called with Xen's extra CS selector.  This causes a panic when
older init builds die.

It seems impossible to make Xen use __USER_CS reliably without
taking a performance hit on every system call, so this fixes the
tests instead with a new paravirt op.  It's a little ugly because
ptrace.h can't include paravirt.h.

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Link: http://lkml.kernel.org/r/f4fcb3947340d9e96ce1054a432f183f9da9db83.1312378163.git.luto@mit.edu
Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
---
 arch/x86/include/asm/desc.h           |    4 ++--
 arch/x86/include/asm/paravirt_types.h |    6 ++++++
 arch/x86/include/asm/ptrace.h         |   19 +++++++++++++++++++
 arch/x86/kernel/paravirt.c            |    4 ++++
 arch/x86/kernel/step.c                |    2 +-
 arch/x86/kernel/vsyscall_64.c         |    6 +-----
 arch/x86/mm/fault.c                   |    2 +-
 arch/x86/xen/enlighten.c              |    4 ++++
 8 files changed, 38 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h
index 7b439d9..41935fa 100644
--- a/arch/x86/include/asm/desc.h
+++ b/arch/x86/include/asm/desc.h
@@ -27,8 +27,8 @@ static inline void fill_ldt(struct desc_struct *desc, const struct user_desc *in
 
 	desc->base2		= (info->base_addr & 0xff000000) >> 24;
 	/*
-	 * Don't allow setting of the lm bit. It is useless anyway
-	 * because 64bit system calls require __USER_CS:
+	 * Don't allow setting of the lm bit. It would confuse
+	 * user_64bit_mode and would get overridden by sysret anyway.
 	 */
 	desc->l			= 0;
 }
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 8288509..96a0f80 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -41,6 +41,7 @@
 
 #include <asm/desc_defs.h>
 #include <asm/kmap_types.h>
+#include <asm/pgtable_types.h>
 
 struct page;
 struct thread_struct;
@@ -63,6 +64,11 @@ struct paravirt_callee_save {
 struct pv_info {
 	unsigned int kernel_rpl;
 	int shared_kernel_pmd;
+
+#ifdef CONFIG_X86_64
+	u16 extra_user_64bit_cs;  /* __USER_CS if none */
+#endif
+
 	int paravirt_enabled;
 	const char *name;
 };
diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
index 94e7618..3566454 100644
--- a/arch/x86/include/asm/ptrace.h
+++ b/arch/x86/include/asm/ptrace.h
@@ -131,6 +131,9 @@ struct pt_regs {
 #ifdef __KERNEL__
 
 #include <linux/init.h>
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt_types.h>
+#endif
 
 struct cpuinfo_x86;
 struct task_struct;
@@ -187,6 +190,22 @@ static inline int v8086_mode(struct pt_regs *regs)
 #endif
 }
 
+#ifdef CONFIG_X86_64
+static inline bool user_64bit_mode(struct pt_regs *regs)
+{
+#ifndef CONFIG_PARAVIRT
+	/*
+	 * On non-paravirt systems, this is the only long mode CPL 3
+	 * selector.  We do not allow long mode selectors in the LDT.
+	 */
+	return regs->cs == __USER_CS;
+#else
+	/* Headers are too twisted for this to go in paravirt.h. */
+	return regs->cs == __USER_CS || regs->cs == pv_info.extra_user_64bit_cs;
+#endif
+}
+#endif
+
 /*
  * X86_32 CPUs don't save ss and esp if the CPU is already in kernel mode
  * when it traps.  The previous stack will be directly underneath the saved
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 869e1ae..681f159 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -299,6 +299,10 @@ struct pv_info pv_info = {
 	.paravirt_enabled = 0,
 	.kernel_rpl = 0,
 	.shared_kernel_pmd = 1,	/* Only used when CONFIG_X86_PAE is set */
+
+#ifdef CONFIG_X86_64
+	.extra_user_64bit_cs = __USER_CS,
+#endif
 };
 
 struct pv_init_ops pv_init_ops = {
diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c
index 7977f0c..c346d11 100644
--- a/arch/x86/kernel/step.c
+++ b/arch/x86/kernel/step.c
@@ -74,7 +74,7 @@ static int is_setting_trap_flag(struct task_struct *child, struct pt_regs *regs)
 
 #ifdef CONFIG_X86_64
 		case 0x40 ... 0x4f:
-			if (regs->cs != __USER_CS)
+			if (!user_64bit_mode(regs))
 				/* 32-bit mode: register increment */
 				return 0;
 			/* 64-bit mode: REX prefix */
diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
index dda7dff..1725930 100644
--- a/arch/x86/kernel/vsyscall_64.c
+++ b/arch/x86/kernel/vsyscall_64.c
@@ -127,11 +127,7 @@ void dotraplinkage do_emulate_vsyscall(struct pt_regs *regs, long error_code)
 
 	local_irq_enable();
 
-	/*
-	 * Real 64-bit user mode code has cs == __USER_CS.  Anything else
-	 * is bogus.
-	 */
-	if (regs->cs != __USER_CS) {
+	if (!user_64bit_mode(regs)) {
 		/*
 		 * If we trapped from kernel mode, we might as well OOPS now
 		 * instead of returning to some random address and OOPSing
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 2dbf6bf..c1d0182 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -105,7 +105,7 @@ check_prefetch_opcode(struct pt_regs *regs, unsigned char *instr,
 		 * but for now it's good enough to assume that long
 		 * mode only uses well known segments or kernel.
 		 */
-		return (!user_mode(regs)) || (regs->cs == __USER_CS);
+		return (!user_mode(regs) || user_64bit_mode(regs));
 #endif
 	case 0x60:
 		/* 0x64 thru 0x67 are valid prefixes in all modes. */
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 5525163..78fe33d 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -937,6 +937,10 @@ static const struct pv_info xen_info __initconst = {
 	.paravirt_enabled = 1,
 	.shared_kernel_pmd = 0,
 
+#ifdef CONFIG_X86_64
+	.extra_user_64bit_cs = FLAT_USER_CS64,
+#endif
+
 	.name = "Xen",
 };
 


* [tip:x86/vdso] x86-64: Add vsyscall:emulate_vsyscall trace event
  2011-08-03 13:31 ` [PATCH v2 6/6] x86-64: Add vsyscall:emulate_vsyscall trace event Andy Lutomirski
@ 2011-08-05  5:39   ` tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Andy Lutomirski @ 2011-08-05  5:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, luto, tglx, hpa, luto

Commit-ID:  c149a665ac488e0dac22a42287f45ad1bda06ff1
Gitweb:     http://git.kernel.org/tip/c149a665ac488e0dac22a42287f45ad1bda06ff1
Author:     Andy Lutomirski <luto@MIT.EDU>
AuthorDate: Wed, 3 Aug 2011 09:31:54 -0400
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Thu, 4 Aug 2011 16:13:53 -0700

x86-64: Add vsyscall:emulate_vsyscall trace event

Vsyscall emulation is slow, so make it easy to track down.

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Link: http://lkml.kernel.org/r/cdaad7da946a80b200df16647c1700db3e1171e9.1312378163.git.luto@mit.edu
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
---
 arch/x86/kernel/vsyscall_64.c    |    6 ++++++
 arch/x86/kernel/vsyscall_trace.h |   29 +++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
index 1725930..93a0d46 100644
--- a/arch/x86/kernel/vsyscall_64.c
+++ b/arch/x86/kernel/vsyscall_64.c
@@ -50,6 +50,9 @@
 #include <asm/vgtod.h>
 #include <asm/traps.h>
 
+#define CREATE_TRACE_POINTS
+#include "vsyscall_trace.h"
+
 DEFINE_VVAR(int, vgetcpu_mode);
 DEFINE_VVAR(struct vsyscall_gtod_data, vsyscall_gtod_data) =
 {
@@ -146,6 +149,9 @@ void dotraplinkage do_emulate_vsyscall(struct pt_regs *regs, long error_code)
 	 * and int 0xcc is two bytes long.
 	 */
 	vsyscall_nr = addr_to_vsyscall_nr(regs->ip - 2);
+
+	trace_emulate_vsyscall(vsyscall_nr);
+
 	if (vsyscall_nr < 0) {
 		warn_bad_vsyscall(KERN_WARNING, regs,
 				  "illegal int 0xcc (exploit attempt?)");
diff --git a/arch/x86/kernel/vsyscall_trace.h b/arch/x86/kernel/vsyscall_trace.h
new file mode 100644
index 0000000..a8b2ede
--- /dev/null
+++ b/arch/x86/kernel/vsyscall_trace.h
@@ -0,0 +1,29 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM vsyscall
+
+#if !defined(__VSYSCALL_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define __VSYSCALL_TRACE_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(emulate_vsyscall,
+
+	    TP_PROTO(int nr),
+
+	    TP_ARGS(nr),
+
+	    TP_STRUCT__entry(__field(int, nr)),
+
+	    TP_fast_assign(
+			   __entry->nr = nr;
+			   ),
+
+	    TP_printk("nr = %d", __entry->nr)
+);
+
+#endif
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH ../../arch/x86/kernel
+#define TRACE_INCLUDE_FILE vsyscall_trace
+#include <trace/define_trace.h>


* Re: [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1
  2011-08-03 13:59     ` Andrew Lutomirski
@ 2011-08-11 20:28       ` Jeremy Fitzhardinge
  2011-08-11 20:47         ` Andrew Lutomirski
  0 siblings, 1 reply; 20+ messages in thread
From: Jeremy Fitzhardinge @ 2011-08-11 20:28 UTC (permalink / raw)
  To: Andrew Lutomirski
  Cc: Konrad Rzeszutek Wilk, mingo, hpa, x86,
	Linux Kernel Mailing List, keir.xen, xen-devel, virtualization

On 08/03/2011 06:59 AM, Andrew Lutomirski wrote:
> On Wed, Aug 3, 2011 at 9:56 AM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Wed, Aug 03, 2011 at 09:53:22AM -0400, Konrad Rzeszutek Wilk wrote:
>>> On Wed, Aug 03, 2011 at 09:31:48AM -0400, Andy Lutomirski wrote:
>>>> This fixes various problems that cropped up with the vdso patches.
>>>>
>>>>  - Patch 1 fixes an information leak to userspace.
>>>>  - Patches 2 and 3 fix the kernel build on gold.
>>>>  - Patches 4 and 5 fix Xen (I hope).
>>>>  - Patch 6 (optional) adds a trace event to vsyscall emulation.  It will
>>>>    make it easier to handle performance regression reports :)
>>> Hm, you seemed to have the x86 maintainers on your email..
>> I definitely need some coffee. What I meant was that you missed putting
>> the x86 maintainers on this patchset. They are the ones that will handle this.
>>
>> I put them on the To list for you.
> Are you sure about that coffee?  I'm pretty sure I had x86@kernel.org in there.

What's the state of this series?  Has tip.git picked it up, or does it
need another go-around?

I just booted a Linus tree and got a backtrace that looks like this
issue, though I haven't looked into it in detail yet.

Thanks,
    J


* Re: [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1
  2011-08-11 20:28       ` Jeremy Fitzhardinge
@ 2011-08-11 20:47         ` Andrew Lutomirski
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Lutomirski @ 2011-08-11 20:47 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Konrad Rzeszutek Wilk, mingo, hpa, x86,
	Linux Kernel Mailing List, keir.xen, xen-devel, virtualization

On Thu, Aug 11, 2011 at 4:28 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> On 08/03/2011 06:59 AM, Andrew Lutomirski wrote:
>> On Wed, Aug 3, 2011 at 9:56 AM, Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com> wrote:
>>> On Wed, Aug 03, 2011 at 09:53:22AM -0400, Konrad Rzeszutek Wilk wrote:
>>>> On Wed, Aug 03, 2011 at 09:31:48AM -0400, Andy Lutomirski wrote:
>>>>> This fixes various problems that cropped up with the vdso patches.
>>>>>
>>>>>  - Patch 1 fixes an information leak to userspace.
>>>>>  - Patches 2 and 3 fix the kernel build on gold.
>>>>>  - Patches 4 and 5 fix Xen (I hope).
>>>>>  - Patch 6 (optional) adds a trace event to vsyscall emulation.  It will
>>>>>    make it easier to handle performance regression reports :)
>>>> Hm, you seemed to have the x86 maintainers on your email..
>>>> I definitely need some coffee. What I meant was that you missed putting
>>>> the x86 maintainers on this patchset. They are the ones that will handle this.
>>>
>>> I put them on the To list for you.
>> Are you sure about that coffee?  I'm pretty sure I had x86@kernel.org in there.
>
> What's the state of this series?  Has tip.git picked it up, or does it
> need another go-around?
>
> I just booted a Linus tree and got a backtrace that looks like this
> issue, though I haven't looked into it in detail yet.

It's in tip/x86/vdso.  I don't know its status on the way from there to -linus.

--Andy


end of thread, other threads:[~2011-08-11 20:47 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-08-03 13:31 [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Andy Lutomirski
2011-08-03 13:31 ` [PATCH v2 1/6] x86-64: Pad vDSO to a page boundary Andy Lutomirski
2011-08-05  5:37   ` [tip:x86/vdso] " tip-bot for Andy Lutomirski
2011-08-03 13:31 ` [PATCH v2 2/6] x86-64: Move the "user" vsyscall segment out of the data segment Andy Lutomirski
2011-08-05  5:37   ` [tip:x86/vdso] " tip-bot for Andy Lutomirski
2011-08-03 13:31 ` [PATCH v2 3/6] x86-64: Work around gold bug 13023 Andy Lutomirski
2011-08-05  5:38   ` [tip:x86/vdso] " tip-bot for Andy Lutomirski
2011-08-03 13:31 ` [PATCH v2 4/6] x86-64/xen: Enable the vvar mapping Andy Lutomirski
2011-08-03 13:49   ` Konrad Rzeszutek Wilk
2011-08-05  5:38   ` [tip:x86/vdso] x86-64, xen: " tip-bot for Andy Lutomirski
2011-08-03 13:31 ` [PATCH v2 5/6] x86-64: Add user_64bit_mode paravirt op Andy Lutomirski
2011-08-05  5:39   ` [tip:x86/vdso] " tip-bot for Andy Lutomirski
2011-08-03 13:31 ` [PATCH v2 6/6] x86-64: Add vsyscall:emulate_vsyscall trace event Andy Lutomirski
2011-08-05  5:39   ` [tip:x86/vdso] " tip-bot for Andy Lutomirski
2011-08-03 13:53 ` [PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1 Konrad Rzeszutek Wilk
2011-08-03 13:56   ` Konrad Rzeszutek Wilk
2011-08-03 13:59     ` Andrew Lutomirski
2011-08-11 20:28       ` Jeremy Fitzhardinge
2011-08-11 20:47         ` Andrew Lutomirski
2011-08-03 17:34 ` [Xen-devel] " Sander Eikelenboom
