* [PATCHv2 0/6] x86: 32-bit compatible C/R on x86_64
@ 2016-06-29 10:57 Dmitry Safonov
  2016-06-29 10:57 ` [PATCHv2 1/6] x86/vdso: unmap vdso blob on vvar mapping failure Dmitry Safonov
                   ` (5 more replies)
  0 siblings, 6 replies; 21+ messages in thread
From: Dmitry Safonov @ 2016-06-29 10:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: 0x7f454c46, linux-mm, mingo, luto, gorcunov, xemul, oleg, Dmitry Safonov

Changes since v1:
- killed the PR_REG_SIZE macro, as Oleg suggested
- cleared SA_IA32_ABI|SA_X32_ABI from oact->sa.sa_flags in do_sigaction(),
  as noticed by Oleg
- moved SA_IA32_ABI|SA_X32_ABI out of the uapi header, as those flags
  shouldn't be exposed to user-space

I also reworked CRIU's patches to work with this patch set, rather than
with the first RFC that swapped TIF_IA32 with an arch_prctl. It still fails
~10% of the 32-bit tests in CRIU's test suite (ZDTM).
The CRIU branch for this can be viewed at [6], and v3 of the patches adding
this functionality has been sent to the mailing list [7].

This patch set is based on [3]; as that series is not yet applied, this one
may make the kbuild test robot unhappy.

Description from v1 [5]:

This patch set is an attempt to add checkpoint/restore support
for 32-bit tasks running in compatibility mode on x86_64 hosts.

Restore in CRIU starts from one root restoring process, which
reads the info for all threads being restored from the image files.
This information is then used to find out which processes
share resources. Each shared resource is later restored by only
one process and all others inherit it.
After that the root process calls clone() and the new threads restore
their properties in parallel. Those threads inherit all of the parent's
mappings and fetch their properties from those mappings
(and call clone() themselves if they have children/subthreads). [1]
Then the restorer blob takes over: it is a PIE binary which
unmaps all VMAs not needed for the restore, maps the new VMAs and
finalizes the restore with the sigreturn syscall. [2]

To restore a 32-bit task we need to do four things in the running
x86_64 restorer blob:
a) set the code selector to __USER32_CS (to run 32-bit code);
b) remap the vdso blob from the 64-bit to the 32-bit one.
   This is primarily needed because the restore may happen on a
   different kernel, which has a different vDSO image than the one
   we had at dump time;
c) if the 32-bit vDSO differs from the dumped image, move it to a free
   place and add jump trampolines to that place;
d) switch the TIF_IA32 flag, so the kernel knows that it is dealing
   with a compat 32-bit application.

From all this:
a) setting CS may be done from userspace, no patches needed;
b) patches 1-3 add the ability to map different vDSO blobs on an x86
   kernel (a userspace usage sketch follows this list);
c) patches for remapping/moving the 32-bit vDSO blob have been sent
   earlier and seem to be accepted [3];
d) as for switching the TIF_IA32 flag, the discussion with Andy ended with
   the conclusion that it's better to remove this flag completely.
   Patches 4-6 delete the usage of TIF_IA32 from the ptrace, signal and
   coredump code. This is a rework/resend of RFC [4].
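
As a purely illustrative example (not part of the series; the address
hint and the error handling are made up), a restorer running in 64-bit
mode could map a 32-bit vDSO for itself roughly like this, assuming the
series is applied and CONFIG_CHECKPOINT_RESTORE is enabled:

  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef ARCH_MAP_VDSO_32
  # define ARCH_MAP_VDSO_32 0x2002        /* from the patched asm/prctl.h */
  #endif

  int main(void)
  {
          /* the address is only a hint passed down to get_unmapped_area() */
          unsigned long hint = 0x70000000;

          if (syscall(SYS_arch_prctl, ARCH_MAP_VDSO_32, hint) < 0) {
                  perror("arch_prctl(ARCH_MAP_VDSO_32)");
                  return 1;
          }
          printf("32-bit vDSO mapped (hint was %#lx)\n", hint);
          return 0;
  }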

[1] https://criu.org/Checkpoint/Restore#Restore
[2] https://criu.org/Restorer_context
[3] https://lkml.org/lkml/2016/6/28/489
[4] https://lkml.org/lkml/2016/4/25/650
[5] https://lkml.org/lkml/2016/6/1/425
[6] https://github.com/0x7f454c46/criu/tree/compat-4
[7] https://lists.openvz.org/pipermail/criu/2016-June/029788.html

Dmitry Safonov (6):
  x86/vdso: unmap vdso blob on vvar mapping failure
  x86/vdso: introduce do_map_vdso() and vdso_type enum
  x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  x86/coredump: use pr_reg size, rather than TIF_IA32 flag
  x86/ptrace: down with test_thread_flag(TIF_IA32)
  x86/signal: add SA_{X32,IA32}_ABI sa_flags

 arch/x86/entry/vdso/vma.c         | 72 ++++++++++++++++++++++-----------------
 arch/x86/ia32/ia32_signal.c       |  2 +-
 arch/x86/include/asm/compat.h     |  8 ++---
 arch/x86/include/asm/fpu/signal.h |  6 ++++
 arch/x86/include/asm/signal.h     |  4 +++
 arch/x86/include/asm/vdso.h       |  4 +++
 arch/x86/include/uapi/asm/prctl.h |  6 ++++
 arch/x86/kernel/process_64.c      | 10 ++++++
 arch/x86/kernel/ptrace.c          |  2 +-
 arch/x86/kernel/signal.c          | 20 ++++++-----
 arch/x86/kernel/signal_compat.c   | 34 ++++++++++++++++--
 fs/binfmt_elf.c                   | 23 +++++--------
 kernel/signal.c                   |  7 ++++
 13 files changed, 133 insertions(+), 65 deletions(-)

-- 
2.9.0

* [PATCHv2 1/6] x86/vdso: unmap vdso blob on vvar mapping failure
  2016-06-29 10:57 [PATCHv2 0/6] x86: 32-bit compatible C/R on x86_64 Dmitry Safonov
@ 2016-06-29 10:57 ` Dmitry Safonov
  2016-07-06 14:16   ` Andy Lutomirski
  2016-06-29 10:57 ` [PATCHv2 2/6] x86/vdso: introduce do_map_vdso() and vdso_type enum Dmitry Safonov
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 21+ messages in thread
From: Dmitry Safonov @ 2016-06-29 10:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: 0x7f454c46, linux-mm, mingo, luto, gorcunov, xemul, oleg,
	Dmitry Safonov, Andy Lutomirski, Thomas Gleixner, H. Peter Anvin,
	x86

If the vvar mapping fails after the vDSO blob has been mapped,
we need to unmap the previously mapped vDSO blob.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Cc: x86@kernel.org
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
---
 arch/x86/entry/vdso/vma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 3329844e3c43..387028e6755d 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -238,7 +238,7 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
 
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
-		goto up_fail;
+		do_munmap(mm, text_start, image->size);
 	}
 
 up_fail:
-- 
2.9.0

* [PATCHv2 2/6] x86/vdso: introduce do_map_vdso() and vdso_type enum
  2016-06-29 10:57 [PATCHv2 0/6] x86: 32-bit compatible C/R on x86_64 Dmitry Safonov
  2016-06-29 10:57 ` [PATCHv2 1/6] x86/vdso: unmap vdso blob on vvar mapping failure Dmitry Safonov
@ 2016-06-29 10:57 ` Dmitry Safonov
  2016-07-06 14:21   ` Andy Lutomirski
  2016-06-29 10:57 ` [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_* Dmitry Safonov
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 21+ messages in thread
From: Dmitry Safonov @ 2016-06-29 10:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: 0x7f454c46, linux-mm, mingo, luto, gorcunov, xemul, oleg,
	Dmitry Safonov, Andy Lutomirski, Thomas Gleixner, H. Peter Anvin,
	x86

Make in-kernel API to map vDSO blobs on x86.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Cc: x86@kernel.org
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
---
 arch/x86/entry/vdso/vma.c   | 70 +++++++++++++++++++++++++--------------------
 arch/x86/include/asm/vdso.h |  4 +++
 2 files changed, 43 insertions(+), 31 deletions(-)

diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 387028e6755d..4017b60eed33 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -176,11 +176,18 @@ static int vvar_fault(const struct vm_special_mapping *sm,
 	return VM_FAULT_SIGBUS;
 }
 
-static int map_vdso(const struct vdso_image *image, bool calculate_addr)
+/*
+ * Add vdso and vvar mappings to current process.
+ * @image          - blob to map
+ * @addr           - request a specific address (zero to map at free addr)
+ * @calculate_addr - turn on aslr (@addr will be ignored)
+ */
+static int map_vdso(const struct vdso_image *image,
+		unsigned long addr, bool calculate_addr)
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	unsigned long addr, text_start;
+	unsigned long text_start;
 	int ret = 0;
 
 	static const struct vm_special_mapping vdso_mapping = {
@@ -193,12 +200,9 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
 		.fault = vvar_fault,
 	};
 
-	if (calculate_addr) {
+	if (calculate_addr)
 		addr = vdso_addr(current->mm->start_stack,
 				 image->size - image->sym_vvar_start);
-	} else {
-		addr = 0;
-	}
 
 	if (down_write_killable(&mm->mmap_sem))
 		return -EINTR;
@@ -249,48 +253,52 @@ up_fail:
 	return ret;
 }
 
-#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
-static int load_vdso32(void)
+int do_map_vdso(vdso_type type, unsigned long addr, bool randomize_addr)
 {
-	if (vdso32_enabled != 1)  /* Other values all mean "disabled" */
-		return 0;
-
-	return map_vdso(&vdso_image_32, false);
-}
+	switch (type) {
+#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
+	case VDSO_32:
+		if (vdso32_enabled != 1)  /* Other values all mean "disabled" */
+			return 0;
+		/* vDSO aslr turned off for i386 vDSO */
+		return map_vdso(&vdso_image_32, addr, false);
+#endif
+#ifdef CONFIG_X86_64
+	case VDSO_64:
+		if (!vdso64_enabled)
+			return 0;
+		return map_vdso(&vdso_image_64, addr, randomize_addr);
+#endif
+#ifdef CONFIG_X86_X32_ABI
+	case VDSO_X32:
+		if (!vdso64_enabled)
+			return 0;
+		return map_vdso(&vdso_image_x32, addr, randomize_addr);
 #endif
+	default:
+		return -EINVAL;
+	}
+}
 
 #ifdef CONFIG_X86_64
 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
-	if (!vdso64_enabled)
-		return 0;
-
-	return map_vdso(&vdso_image_64, true);
+	return do_map_vdso(VDSO_64, 0, true);
 }
 
 #ifdef CONFIG_COMPAT
 int compat_arch_setup_additional_pages(struct linux_binprm *bprm,
 				       int uses_interp)
 {
-#ifdef CONFIG_X86_X32_ABI
-	if (test_thread_flag(TIF_X32)) {
-		if (!vdso64_enabled)
-			return 0;
-
-		return map_vdso(&vdso_image_x32, true);
-	}
-#endif
-#ifdef CONFIG_IA32_EMULATION
-	return load_vdso32();
-#else
-	return 0;
-#endif
+	if (test_thread_flag(TIF_X32))
+		return do_map_vdso(VDSO_X32, 0, true);
+	return do_map_vdso(VDSO_32, 0, false);
 }
 #endif
 #else
 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
-	return load_vdso32();
+	return do_map_vdso(VDSO_32, 0, false);
 }
 #endif
 
diff --git a/arch/x86/include/asm/vdso.h b/arch/x86/include/asm/vdso.h
index 43dc55be524e..2be137897842 100644
--- a/arch/x86/include/asm/vdso.h
+++ b/arch/x86/include/asm/vdso.h
@@ -41,6 +41,10 @@ extern const struct vdso_image vdso_image_32;
 
 extern void __init init_vdso_image(const struct vdso_image *image);
 
+typedef enum { VDSO_32, VDSO_64, VDSO_X32 } vdso_type;
+
+extern int do_map_vdso(vdso_type type, unsigned long addr, bool randomize_addr);
+
 #endif /* __ASSEMBLER__ */
 
 #endif /* _ASM_X86_VDSO_H */
-- 
2.9.0

* [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  2016-06-29 10:57 [PATCHv2 0/6] x86: 32-bit compatible C/R on x86_64 Dmitry Safonov
  2016-06-29 10:57 ` [PATCHv2 1/6] x86/vdso: unmap vdso blob on vvar mapping failure Dmitry Safonov
  2016-06-29 10:57 ` [PATCHv2 2/6] x86/vdso: introduce do_map_vdso() and vdso_type enum Dmitry Safonov
@ 2016-06-29 10:57 ` Dmitry Safonov
  2016-07-06 14:30   ` Andy Lutomirski
  2016-06-29 10:57 ` [PATCHv2 4/6] x86/coredump: use pr_reg size, rather than TIF_IA32 flag Dmitry Safonov
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 21+ messages in thread
From: Dmitry Safonov @ 2016-06-29 10:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: 0x7f454c46, linux-mm, mingo, luto, gorcunov, xemul, oleg,
	Dmitry Safonov, Andy Lutomirski, Thomas Gleixner, H. Peter Anvin,
	x86

Add an API to change the vdso blob type with arch_prctl.
As this is useful only for the needs of CRIU, expose
this interface under CONFIG_CHECKPOINT_RESTORE.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Cc: x86@kernel.org
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
---
 arch/x86/include/uapi/asm/prctl.h |  6 ++++++
 arch/x86/kernel/process_64.c      | 10 ++++++++++
 2 files changed, 16 insertions(+)

diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h
index 3ac5032fae09..ae135de547f5 100644
--- a/arch/x86/include/uapi/asm/prctl.h
+++ b/arch/x86/include/uapi/asm/prctl.h
@@ -6,4 +6,10 @@
 #define ARCH_GET_FS 0x1003
 #define ARCH_GET_GS 0x1004
 
+#ifdef CONFIG_CHECKPOINT_RESTORE
+# define ARCH_MAP_VDSO_X32	0x2001
+# define ARCH_MAP_VDSO_32	0x2002
+# define ARCH_MAP_VDSO_64	0x2003
+#endif
+
 #endif /* _ASM_X86_PRCTL_H */
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 6e789ca1f841..64459c88b3d9 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -49,6 +49,7 @@
 #include <asm/debugreg.h>
 #include <asm/switch_to.h>
 #include <asm/xen/hypervisor.h>
+#include <asm/vdso.h>
 
 asmlinkage extern void ret_from_fork(void);
 
@@ -577,6 +578,15 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		break;
 	}
 
+#ifdef CONFIG_CHECKPOINT_RESTORE
+	case ARCH_MAP_VDSO_X32:
+		return do_map_vdso(VDSO_X32, addr, false);
+	case ARCH_MAP_VDSO_32:
+		return do_map_vdso(VDSO_32, addr, false);
+	case ARCH_MAP_VDSO_64:
+		return do_map_vdso(VDSO_64, addr, false);
+#endif
+
 	default:
 		ret = -EINVAL;
 		break;
-- 
2.9.0

* [PATCHv2 4/6] x86/coredump: use pr_reg size, rather than TIF_IA32 flag
  2016-06-29 10:57 [PATCHv2 0/6] x86: 32-bit compatible C/R on x86_64 Dmitry Safonov
                   ` (2 preceding siblings ...)
  2016-06-29 10:57 ` [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_* Dmitry Safonov
@ 2016-06-29 10:57 ` Dmitry Safonov
  2016-06-29 10:57 ` [PATCHv2 5/6] x86/ptrace: down with test_thread_flag(TIF_IA32) Dmitry Safonov
  2016-06-29 10:57 ` [PATCHv2 6/6] x86/signal: add SA_{X32,IA32}_ABI sa_flags Dmitry Safonov
  5 siblings, 0 replies; 21+ messages in thread
From: Dmitry Safonov @ 2016-06-29 10:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: 0x7f454c46, linux-mm, mingo, luto, gorcunov, xemul, oleg,
	Dmitry Safonov, Andy Lutomirski, x86, Alexander Viro,
	linux-fsdevel

Kill the PR_REG_SIZE and PR_REG_PTR macros, as we can get the regset
size from the regset view.

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Cc: x86@kernel.org
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
---
 arch/x86/include/asm/compat.h |  8 ++++----
 fs/binfmt_elf.c               | 23 ++++++++---------------
 2 files changed, 12 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/compat.h b/arch/x86/include/asm/compat.h
index 5a3b2c119ed0..4b039bd297ac 100644
--- a/arch/x86/include/asm/compat.h
+++ b/arch/x86/include/asm/compat.h
@@ -264,10 +264,10 @@ struct compat_shmid64_ds {
 #ifdef CONFIG_X86_X32_ABI
 typedef struct user_regs_struct compat_elf_gregset_t;
 
-#define PR_REG_SIZE(S) (test_thread_flag(TIF_IA32) ? 68 : 216)
-#define PRSTATUS_SIZE(S) (test_thread_flag(TIF_IA32) ? 144 : 296)
-#define SET_PR_FPVALID(S,V) \
-  do { *(int *) (((void *) &((S)->pr_reg)) + PR_REG_SIZE(0)) = (V); } \
+/* Full regset -- prstatus on x32, otherwise on ia32 */
+#define PRSTATUS_SIZE(S, R) (R != sizeof(S.pr_reg) ? 144 : 296)
+#define SET_PR_FPVALID(S, V, R) \
+  do { *(int *) (((void *) &((S)->pr_reg)) + R) = (V); } \
   while (0)
 
 #define COMPAT_USE_64BIT_TIME \
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index a7a28110dc80..8fd6cf9083d0 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1622,20 +1622,12 @@ static void do_thread_regset_writeback(struct task_struct *task,
 		regset->writeback(task, regset, 1);
 }
 
-#ifndef PR_REG_SIZE
-#define PR_REG_SIZE(S) sizeof(S)
-#endif
-
 #ifndef PRSTATUS_SIZE
-#define PRSTATUS_SIZE(S) sizeof(S)
-#endif
-
-#ifndef PR_REG_PTR
-#define PR_REG_PTR(S) (&((S)->pr_reg))
+#define PRSTATUS_SIZE(S, R) sizeof(S)
 #endif
 
 #ifndef SET_PR_FPVALID
-#define SET_PR_FPVALID(S, V) ((S)->pr_fpvalid = (V))
+#define SET_PR_FPVALID(S, V, R) ((S)->pr_fpvalid = (V))
 #endif
 
 static int fill_thread_core_info(struct elf_thread_core_info *t,
@@ -1643,6 +1635,7 @@ static int fill_thread_core_info(struct elf_thread_core_info *t,
 				 long signr, size_t *total)
 {
 	unsigned int i;
+	unsigned int regset_size = view->regsets[0].n * view->regsets[0].size;
 
 	/*
 	 * NT_PRSTATUS is the one special case, because the regset data
@@ -1651,12 +1644,11 @@ static int fill_thread_core_info(struct elf_thread_core_info *t,
 	 * We assume that regset 0 is NT_PRSTATUS.
 	 */
 	fill_prstatus(&t->prstatus, t->task, signr);
-	(void) view->regsets[0].get(t->task, &view->regsets[0],
-				    0, PR_REG_SIZE(t->prstatus.pr_reg),
-				    PR_REG_PTR(&t->prstatus), NULL);
+	(void) view->regsets[0].get(t->task, &view->regsets[0], 0, regset_size,
+				    &t->prstatus.pr_reg, NULL);
 
 	fill_note(&t->notes[0], "CORE", NT_PRSTATUS,
-		  PRSTATUS_SIZE(t->prstatus), &t->prstatus);
+		  PRSTATUS_SIZE(t->prstatus, regset_size), &t->prstatus);
 	*total += notesize(&t->notes[0]);
 
 	do_thread_regset_writeback(t->task, &view->regsets[0]);
@@ -1686,7 +1678,8 @@ static int fill_thread_core_info(struct elf_thread_core_info *t,
 						  regset->core_note_type,
 						  size, data);
 				else {
-					SET_PR_FPVALID(&t->prstatus, 1);
+					SET_PR_FPVALID(&t->prstatus,
+							1, regset_size);
 					fill_note(&t->notes[i], "CORE",
 						  NT_PRFPREG, size, data);
 				}
-- 
2.9.0

* [PATCHv2 5/6] x86/ptrace: down with test_thread_flag(TIF_IA32)
  2016-06-29 10:57 [PATCHv2 0/6] x86: 32-bit compatible C/R on x86_64 Dmitry Safonov
                   ` (3 preceding siblings ...)
  2016-06-29 10:57 ` [PATCHv2 4/6] x86/coredump: use pr_reg size, rather than TIF_IA32 flag Dmitry Safonov
@ 2016-06-29 10:57 ` Dmitry Safonov
  2016-07-06 14:32   ` Andy Lutomirski
  2016-06-29 10:57 ` [PATCHv2 6/6] x86/signal: add SA_{X32,IA32}_ABI sa_flags Dmitry Safonov
  5 siblings, 1 reply; 21+ messages in thread
From: Dmitry Safonov @ 2016-06-29 10:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: 0x7f454c46, linux-mm, mingo, luto, gorcunov, xemul, oleg,
	Dmitry Safonov, Andy Lutomirski, x86

As the task isn't executing at the moment of {GET,SET}REGS,
return the regset that corresponds to its code selector, rather than
to the value of the TIF_IA32 flag.
I.e. if we ptrace an i386 ELF binary that has just changed its
code selector to __USER_CS, then GETREGS will return the
full x86_64 register set.

Note that this will work only if the application has changed its CS.
If the application does a 32-bit syscall with __USER_CS, ptrace
will still return the 64-bit register set, which might still be confusing
for tools that expect TS_COMPAT to be exposed [1, 2].

So this change should make PTRACE_GETREGSET more reliable, and
it is another step towards dropping the TIF_{IA32,X32} flags.

[1]: https://sourceforge.net/p/strace/mailman/message/30471411/
[2]: https://lkml.org/lkml/2012/1/18/320

Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Cc: x86@kernel.org
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
---
 arch/x86/kernel/ptrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index 600edd225e81..a2612d06cf4b 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -1355,7 +1355,7 @@ void update_regset_xstate_info(unsigned int size, u64 xstate_mask)
 const struct user_regset_view *task_user_regset_view(struct task_struct *task)
 {
 #ifdef CONFIG_IA32_EMULATION
-	if (test_tsk_thread_flag(task, TIF_IA32))
+	if (!user_64bit_mode(task_pt_regs(task)))
 #endif
 #if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION
 		return &user_x86_32_view;
-- 
2.9.0

* [PATCHv2 6/6] x86/signal: add SA_{X32,IA32}_ABI sa_flags
  2016-06-29 10:57 [PATCHv2 0/6] x86: 32-bit compatible C/R on x86_64 Dmitry Safonov
                   ` (4 preceding siblings ...)
  2016-06-29 10:57 ` [PATCHv2 5/6] x86/ptrace: down with test_thread_flag(TIF_IA32) Dmitry Safonov
@ 2016-06-29 10:57 ` Dmitry Safonov
  2016-07-06 14:36   ` Andy Lutomirski
  5 siblings, 1 reply; 21+ messages in thread
From: Dmitry Safonov @ 2016-06-29 10:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: 0x7f454c46, linux-mm, mingo, luto, gorcunov, xemul, oleg,
	Dmitry Safonov, Andy Lutomirski, x86

Introduce new flags that define which ABI to use when creating the sigframe.
The kernel will set those flags according to the ABI of the sigaction
syscall that set the handler for the signal being delivered.

This drops the dependency on the TIF_IA32/TIF_X32 flags on signal delivery.
Those flags are used only under CONFIG_COMPAT.

In a similar way, ARM uses sa_flags to distinguish in which mode to deliver
a signal to 26-bit applications (see SA_THIRTYTWO).

Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Cc: x86@kernel.org
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
---
 arch/x86/ia32/ia32_signal.c       |  2 +-
 arch/x86/include/asm/fpu/signal.h |  6 ++++++
 arch/x86/include/asm/signal.h     |  4 ++++
 arch/x86/kernel/signal.c          | 20 +++++++++++---------
 arch/x86/kernel/signal_compat.c   | 34 +++++++++++++++++++++++++++++++---
 kernel/signal.c                   |  7 +++++++
 6 files changed, 60 insertions(+), 13 deletions(-)

diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
index 2f29f4e407c3..cb13c0564ea7 100644
--- a/arch/x86/ia32/ia32_signal.c
+++ b/arch/x86/ia32/ia32_signal.c
@@ -378,7 +378,7 @@ int ia32_setup_rt_frame(int sig, struct ksignal *ksig,
 		put_user_ex(*((u64 *)&code), (u64 __user *)frame->retcode);
 	} put_user_catch(err);
 
-	err |= copy_siginfo_to_user32(&frame->info, &ksig->info);
+	err |= __copy_siginfo_to_user32(&frame->info, &ksig->info, false);
 	err |= ia32_setup_sigcontext(&frame->uc.uc_mcontext, fpstate,
 				     regs, set->sig[0]);
 	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
diff --git a/arch/x86/include/asm/fpu/signal.h b/arch/x86/include/asm/fpu/signal.h
index 0e970d00dfcd..20a1fbf7fe4e 100644
--- a/arch/x86/include/asm/fpu/signal.h
+++ b/arch/x86/include/asm/fpu/signal.h
@@ -19,6 +19,12 @@ int ia32_setup_frame(int sig, struct ksignal *ksig,
 # define ia32_setup_rt_frame	__setup_rt_frame
 #endif
 
+#ifdef CONFIG_COMPAT
+int __copy_siginfo_to_user32(compat_siginfo_t __user *to,
+		const siginfo_t *from, bool x32_ABI);
+#endif
+
+
 extern void convert_from_fxsr(struct user_i387_ia32_struct *env,
 			      struct task_struct *tsk);
 extern void convert_to_fxsr(struct task_struct *tsk,
diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h
index 2138c9ae19ee..72c346f201b2 100644
--- a/arch/x86/include/asm/signal.h
+++ b/arch/x86/include/asm/signal.h
@@ -23,6 +23,10 @@ typedef struct {
 	unsigned long sig[_NSIG_WORDS];
 } sigset_t;
 
+/* non-uapi in-kernel SA_FLAGS for those indicates ABI for a signal frame */
+#define SA_IA32_ABI	0x02000000u
+#define SA_X32_ABI	0x01000000u
+
 #ifndef CONFIG_COMPAT
 typedef sigset_t compat_sigset_t;
 #endif
diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
index 22cc2f9f8aec..fdd819b5599e 100644
--- a/arch/x86/kernel/signal.c
+++ b/arch/x86/kernel/signal.c
@@ -42,6 +42,7 @@
 #include <asm/syscalls.h>
 
 #include <asm/sigframe.h>
+#include <asm/signal.h>
 
 #define COPY(x)			do {			\
 	get_user_ex(regs->x, &sc->x);			\
@@ -547,7 +548,7 @@ static int x32_setup_rt_frame(struct ksignal *ksig,
 		return -EFAULT;
 
 	if (ksig->ka.sa.sa_flags & SA_SIGINFO) {
-		if (copy_siginfo_to_user32(&frame->info, &ksig->info))
+		if (__copy_siginfo_to_user32(&frame->info, &ksig->info, true))
 			return -EFAULT;
 	}
 
@@ -660,20 +661,21 @@ badframe:
 	return 0;
 }
 
-static inline int is_ia32_compat_frame(void)
+static inline int is_ia32_compat_frame(struct ksignal *ksig)
 {
 	return config_enabled(CONFIG_IA32_EMULATION) &&
-	       test_thread_flag(TIF_IA32);
+		ksig->ka.sa.sa_flags & SA_IA32_ABI;
 }
 
-static inline int is_ia32_frame(void)
+static inline int is_ia32_frame(struct ksignal *ksig)
 {
-	return config_enabled(CONFIG_X86_32) || is_ia32_compat_frame();
+	return config_enabled(CONFIG_X86_32) || is_ia32_compat_frame(ksig);
 }
 
-static inline int is_x32_frame(void)
+static inline int is_x32_frame(struct ksignal *ksig)
 {
-	return config_enabled(CONFIG_X86_X32_ABI) && test_thread_flag(TIF_X32);
+	return config_enabled(CONFIG_X86_X32_ABI) &&
+		ksig->ka.sa.sa_flags & SA_X32_ABI;
 }
 
 static int
@@ -684,12 +686,12 @@ setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs)
 	compat_sigset_t *cset = (compat_sigset_t *) set;
 
 	/* Set up the stack frame */
-	if (is_ia32_frame()) {
+	if (is_ia32_frame(ksig)) {
 		if (ksig->ka.sa.sa_flags & SA_SIGINFO)
 			return ia32_setup_rt_frame(usig, ksig, cset, regs);
 		else
 			return ia32_setup_frame(usig, ksig, cset, regs);
-	} else if (is_x32_frame()) {
+	} else if (is_x32_frame(ksig)) {
 		return x32_setup_rt_frame(ksig, cset, regs);
 	} else {
 		return __setup_rt_frame(ksig->sig, ksig, set, regs);
diff --git a/arch/x86/kernel/signal_compat.c b/arch/x86/kernel/signal_compat.c
index dc3c0b1c816f..94561fda4d89 100644
--- a/arch/x86/kernel/signal_compat.c
+++ b/arch/x86/kernel/signal_compat.c
@@ -1,10 +1,32 @@
 #include <linux/compat.h>
 #include <linux/uaccess.h>
+#include <linux/ptrace.h>
 
-int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
+void sigaction_compat_abi(struct k_sigaction *act, struct k_sigaction *oact)
+{
+	/* Don't leak in-kernel non-uapi flags to user-space */
+	if (oact)
+		oact->sa.sa_flags &= ~(SA_IA32_ABI | SA_X32_ABI);
+
+	if (!act)
+		return;
+
+	/* Don't let flags to be set from userspace */
+	act->sa.sa_flags &= ~(SA_IA32_ABI | SA_X32_ABI);
+
+	if (user_64bit_mode(current_pt_regs()))
+		return;
+
+	if (in_ia32_syscall())
+		act->sa.sa_flags |= SA_IA32_ABI;
+	if (in_x32_syscall())
+		act->sa.sa_flags |= SA_X32_ABI;
+}
+
+int __copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from,
+		bool x32_ABI)
 {
 	int err = 0;
-	bool ia32 = test_thread_flag(TIF_IA32);
 
 	if (!access_ok(VERIFY_WRITE, to, sizeof(compat_siginfo_t)))
 		return -EFAULT;
@@ -38,7 +60,7 @@ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
 				put_user_ex(from->si_arch, &to->si_arch);
 				break;
 			case __SI_CHLD >> 16:
-				if (ia32) {
+				if (!x32_ABI) {
 					put_user_ex(from->si_utime, &to->si_utime);
 					put_user_ex(from->si_stime, &to->si_stime);
 				} else {
@@ -72,6 +94,12 @@ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
 	return err;
 }
 
+/* from syscall's path, where we know the ABI */
+int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
+{
+	return __copy_siginfo_to_user32(to, from, in_x32_syscall());
+}
+
 int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
 {
 	int err = 0;
diff --git a/kernel/signal.c b/kernel/signal.c
index 96e9bc40667f..c47632a9f148 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -3048,6 +3048,11 @@ void kernel_sigaction(int sig, __sighandler_t action)
 }
 EXPORT_SYMBOL(kernel_sigaction);
 
+void __weak sigaction_compat_abi(struct k_sigaction *act,
+		struct k_sigaction *oact)
+{
+}
+
 int do_sigaction(int sig, struct k_sigaction *act, struct k_sigaction *oact)
 {
 	struct task_struct *p = current, *t;
@@ -3063,6 +3068,8 @@ int do_sigaction(int sig, struct k_sigaction *act, struct k_sigaction *oact)
 	if (oact)
 		*oact = *k;
 
+	sigaction_compat_abi(act, oact);
+
 	if (act) {
 		sigdelsetmask(&act->sa.sa_mask,
 			      sigmask(SIGKILL) | sigmask(SIGSTOP));
-- 
2.9.0

* Re: [PATCHv2 1/6] x86/vdso: unmap vdso blob on vvar mapping failure
  2016-06-29 10:57 ` [PATCHv2 1/6] x86/vdso: unmap vdso blob on vvar mapping failure Dmitry Safonov
@ 2016-07-06 14:16   ` Andy Lutomirski
  0 siblings, 0 replies; 21+ messages in thread
From: Andy Lutomirski @ 2016-07-06 14:16 UTC (permalink / raw)
  To: Dmitry Safonov
  Cc: linux-kernel, Dmitry Safonov, linux-mm, Ingo Molnar,
	Cyrill Gorcunov, xemul, Oleg Nesterov, Andy Lutomirski,
	Thomas Gleixner, H. Peter Anvin, X86 ML

On Wed, Jun 29, 2016 at 3:57 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
> If the vvar mapping fails after the vDSO blob has been mapped,
> we need to unmap the previously mapped vDSO blob.

Acked-by: Andy Lutomirski <luto@kernel.org>

Although you should probably also update the failure code to clear out
context.vdso_image a few lines down.

* Re: [PATCHv2 2/6] x86/vdso: introduce do_map_vdso() and vdso_type enum
  2016-06-29 10:57 ` [PATCHv2 2/6] x86/vdso: introduce do_map_vdso() and vdso_type enum Dmitry Safonov
@ 2016-07-06 14:21   ` Andy Lutomirski
  2016-07-07 11:04     ` Dmitry Safonov
  0 siblings, 1 reply; 21+ messages in thread
From: Andy Lutomirski @ 2016-07-06 14:21 UTC (permalink / raw)
  To: Dmitry Safonov
  Cc: linux-kernel, Dmitry Safonov, linux-mm, Ingo Molnar,
	Cyrill Gorcunov, xemul, Oleg Nesterov, Andy Lutomirski,
	Thomas Gleixner, H. Peter Anvin, X86 ML

On Wed, Jun 29, 2016 at 3:57 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
> Make in-kernel API to map vDSO blobs on x86.

I think the addr calculation was already confusing and is now even
worse.  How about simplifying it?  Get rid of calculate_addr entirely
and push the vdso_addr calls into arch_setup_additional_pages, etc.
Then just use addr directly in the map_vdso code.
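
For illustration, the callers could then look roughly like the sketch
below (this only illustrates the suggestion; it assumes map_vdso() is
reduced to taking a plain address, and the helper name is made up):

  static int map_vdso_randomized(const struct vdso_image *image)
  {
          /* compute the ASLR hint in the caller instead of in map_vdso() */
          unsigned long addr = vdso_addr(current->mm->start_stack,
                                  image->size - image->sym_vvar_start);

          return map_vdso(image, addr);
  }

  int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
  {
          if (!vdso64_enabled)
                  return 0;
          return map_vdso_randomized(&vdso_image_64);
  }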

> +int do_map_vdso(vdso_type type, unsigned long addr, bool randomize_addr)
>  {
> -       if (vdso32_enabled != 1)  /* Other values all mean "disabled" */
> -               return 0;
> -
> -       return map_vdso(&vdso_image_32, false);
> -}
> +       switch (type) {
> +#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
> +       case VDSO_32:
> +               if (vdso32_enabled != 1)  /* Other values all mean "disabled" */
> +                       return 0;
> +               /* vDSO aslr turned off for i386 vDSO */
> +               return map_vdso(&vdso_image_32, addr, false);
> +#endif
> +#ifdef CONFIG_X86_64
> +       case VDSO_64:
> +               if (!vdso64_enabled)
> +                       return 0;
> +               return map_vdso(&vdso_image_64, addr, randomize_addr);
> +#endif
> +#ifdef CONFIG_X86_X32_ABI
> +       case VDSO_X32:
> +               if (!vdso64_enabled)
> +                       return 0;
> +               return map_vdso(&vdso_image_x32, addr, randomize_addr);
>  #endif
> +       default:
> +               return -EINVAL;
> +       }
> +}

Why is this better than just passing the vdso_image pointer in?

* Re: [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  2016-06-29 10:57 ` [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_* Dmitry Safonov
@ 2016-07-06 14:30   ` Andy Lutomirski
  2016-07-07 11:11     ` Dmitry Safonov
  0 siblings, 1 reply; 21+ messages in thread
From: Andy Lutomirski @ 2016-07-06 14:30 UTC (permalink / raw)
  To: Dmitry Safonov, Michal Hocko, Vladimir Davydov
  Cc: linux-kernel, Dmitry Safonov, linux-mm, Ingo Molnar,
	Cyrill Gorcunov, xemul, Oleg Nesterov, Andy Lutomirski,
	Thomas Gleixner, H. Peter Anvin, X86 ML

On Wed, Jun 29, 2016 at 3:57 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
> Add an API to change the vdso blob type with arch_prctl.
> As this is useful only for the needs of CRIU, expose
> this interface under CONFIG_CHECKPOINT_RESTORE.

> +#ifdef CONFIG_CHECKPOINT_RESTORE
> +       case ARCH_MAP_VDSO_X32:
> +               return do_map_vdso(VDSO_X32, addr, false);
> +       case ARCH_MAP_VDSO_32:
> +               return do_map_vdso(VDSO_32, addr, false);
> +       case ARCH_MAP_VDSO_64:
> +               return do_map_vdso(VDSO_64, addr, false);
> +#endif
> +

This will have an odd side effect: if the old mapping is still around,
its .fault will start behaving erratically.  I wonder if we can either
reliably zap the old vma (or check that it's not there any more)
before mapping a new one or whether we can associate the vdso image
with the vma (possibly by having a separate vm_special_mapping for
each vdso_image.  The latter is quite easy: change vdso_image to embed
vm_special_mapping and use container_of in vdso_fault to fish the
vdso_image back out.  But we'd have to embed another
vm_special_mapping for the vvar mapping as well for the same reason.
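
A rough sketch of that container_of() idea (illustrative only, not a
posted patch; it assumes struct vdso_image grows an embedded
'text_mapping' member and reuses the body of the existing vdso_fault()):

  static int vdso_fault(const struct vm_special_mapping *sm,
                        struct vm_area_struct *vma, struct vm_fault *vmf)
  {
          /* fish the image out of the embedded mapping instead of
           * looking it up via mm->context.vdso_image */
          const struct vdso_image *image =
                  container_of(sm, struct vdso_image, text_mapping);

          if ((vmf->pgoff << PAGE_SHIFT) >= image->size)
                  return VM_FAULT_SIGBUS;

          vmf->page = virt_to_page(image->data + (vmf->pgoff << PAGE_SHIFT));
          get_page(vmf->page);
          return 0;
  }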

I'm also a bit concerned that __install_special_mapping might not get
all the cgroup and rlimit stuff right.  If we ensure that any old
mappings are gone, then the damage is bounded, but otherwise someone
might call this in a loop and fill their address space with arbitrary
numbers of special mappings.

--Andy

* Re: [PATCHv2 5/6] x86/ptrace: down with test_thread_flag(TIF_IA32)
  2016-06-29 10:57 ` [PATCHv2 5/6] x86/ptrace: down with test_thread_flag(TIF_IA32) Dmitry Safonov
@ 2016-07-06 14:32   ` Andy Lutomirski
  0 siblings, 0 replies; 21+ messages in thread
From: Andy Lutomirski @ 2016-07-06 14:32 UTC (permalink / raw)
  To: Dmitry Safonov, Pedro Alves
  Cc: linux-kernel, Dmitry Safonov, linux-mm, Ingo Molnar,
	Cyrill Gorcunov, xemul, Oleg Nesterov, Andy Lutomirski, X86 ML

On Wed, Jun 29, 2016 at 3:57 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
> As the task isn't executing at the moment of {GET,SET}REGS,
> return the regset that corresponds to its code selector, rather than
> to the value of the TIF_IA32 flag.
> I.e. if we ptrace an i386 ELF binary that has just changed its
> code selector to __USER_CS, then GETREGS will return the
> full x86_64 register set.

Pedro, I think this will cause gdb to be a little less broken than it
is now.  Am I right?  Will this break anything?

--Andy

* Re: [PATCHv2 6/6] x86/signal: add SA_{X32,IA32}_ABI sa_flags
  2016-06-29 10:57 ` [PATCHv2 6/6] x86/signal: add SA_{X32,IA32}_ABI sa_flags Dmitry Safonov
@ 2016-07-06 14:36   ` Andy Lutomirski
  0 siblings, 0 replies; 21+ messages in thread
From: Andy Lutomirski @ 2016-07-06 14:36 UTC (permalink / raw)
  To: Dmitry Safonov
  Cc: linux-kernel, Dmitry Safonov, linux-mm, Ingo Molnar,
	Cyrill Gorcunov, xemul, Oleg Nesterov, Andy Lutomirski, X86 ML

On Wed, Jun 29, 2016 at 3:57 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
> Introduce new flags that define which ABI to use when creating the sigframe.
> The kernel will set those flags according to the ABI of the sigaction
> syscall that set the handler for the signal being delivered.
>
> This drops the dependency on the TIF_IA32/TIF_X32 flags on signal delivery.
> Those flags are used only under CONFIG_COMPAT.
>
> In a similar way, ARM uses sa_flags to distinguish in which mode to deliver
> a signal to 26-bit applications (see SA_THIRTYTWO).

Reviewed-by: Andy Lutomirski <luto@kernel.org>

>
> Cc: Oleg Nesterov <oleg@redhat.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Cyrill Gorcunov <gorcunov@openvz.org>
> Cc: Pavel Emelyanov <xemul@virtuozzo.com>
> Cc: x86@kernel.org
> Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>

* Re: [PATCHv2 2/6] x86/vdso: introduce do_map_vdso() and vdso_type enum
  2016-07-06 14:21   ` Andy Lutomirski
@ 2016-07-07 11:04     ` Dmitry Safonov
  0 siblings, 0 replies; 21+ messages in thread
From: Dmitry Safonov @ 2016-07-07 11:04 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: linux-kernel, Dmitry Safonov, linux-mm, Ingo Molnar,
	Cyrill Gorcunov, xemul, Oleg Nesterov, Andy Lutomirski,
	Thomas Gleixner, H. Peter Anvin, X86 ML

On 07/06/2016 05:21 PM, Andy Lutomirski wrote:
> On Wed, Jun 29, 2016 at 3:57 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
>> Make in-kernel API to map vDSO blobs on x86.
>
> I think the addr calculation was already confusing and is now even
> worse.  How about simplifying it?  Get rid of calculate_addr entirely
> and push the vdso_addr calls into arch_setup_additional_pages, etc.
> Then just use addr directly in the map_vdso code.

Thanks, will do.

>> +int do_map_vdso(vdso_type type, unsigned long addr, bool randomize_addr)
>>  {
>> -       if (vdso32_enabled != 1)  /* Other values all mean "disabled" */
>> -               return 0;
>> -
>> -       return map_vdso(&vdso_image_32, false);
>> -}
>> +       switch (type) {
>> +#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
>> +       case VDSO_32:
>> +               if (vdso32_enabled != 1)  /* Other values all mean "disabled" */
>> +                       return 0;
>> +               /* vDSO aslr turned off for i386 vDSO */
>> +               return map_vdso(&vdso_image_32, addr, false);
>> +#endif
>> +#ifdef CONFIG_X86_64
>> +       case VDSO_64:
>> +               if (!vdso64_enabled)
>> +                       return 0;
>> +               return map_vdso(&vdso_image_64, addr, randomize_addr);
>> +#endif
>> +#ifdef CONFIG_X86_X32_ABI
>> +       case VDSO_X32:
>> +               if (!vdso64_enabled)
>> +                       return 0;
>> +               return map_vdso(&vdso_image_x32, addr, randomize_addr);
>>  #endif
>> +       default:
>> +               return -EINVAL;
>> +       }
>> +}
>
> Why is this better than just passing the vdso_image pointer in?

Hmm, then all callers should be under the same ifdefs as vdso_image
blobs. Ok, will do.

* Re: [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  2016-07-06 14:30   ` Andy Lutomirski
@ 2016-07-07 11:11     ` Dmitry Safonov
  2016-07-10 12:44       ` Andy Lutomirski
  0 siblings, 1 reply; 21+ messages in thread
From: Dmitry Safonov @ 2016-07-07 11:11 UTC (permalink / raw)
  To: Andy Lutomirski, Michal Hocko, Vladimir Davydov
  Cc: linux-kernel, Dmitry Safonov, linux-mm, Ingo Molnar,
	Cyrill Gorcunov, xemul, Oleg Nesterov, Andy Lutomirski,
	Thomas Gleixner, H. Peter Anvin, X86 ML

On 07/06/2016 05:30 PM, Andy Lutomirski wrote:
> On Wed, Jun 29, 2016 at 3:57 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
>> Add an API to change the vdso blob type with arch_prctl.
>> As this is useful only for the needs of CRIU, expose
>> this interface under CONFIG_CHECKPOINT_RESTORE.
>
>> +#ifdef CONFIG_CHECKPOINT_RESTORE
>> +       case ARCH_MAP_VDSO_X32:
>> +               return do_map_vdso(VDSO_X32, addr, false);
>> +       case ARCH_MAP_VDSO_32:
>> +               return do_map_vdso(VDSO_32, addr, false);
>> +       case ARCH_MAP_VDSO_64:
>> +               return do_map_vdso(VDSO_64, addr, false);
>> +#endif
>> +
>
> This will have an odd side effect: if the old mapping is still around,
> its .fault will start behaving erratically.  I wonder if we can either
> reliably zap the old vma (or check that it's not there any more)
> before mapping a new one or whether we can associate the vdso image
> with the vma (possibly by having a separate vm_special_mapping for
> each vdso_image.  The latter is quite easy: change vdso_image to embed
> vm_special_mapping and use container_of in vdso_fault to fish the
> vdso_image back out.  But we'd have to embed another
> vm_special_mapping for the vvar mapping as well for the same reason.
>
> I'm also a bit concerned that __install_special_mapping might not get
> all the cgroup and rlimit stuff right.  If we ensure that any old
> mappings are gone, then the damage is bounded, but otherwise someone
> might call this in a loop and fill their address space with arbitrary
> numbers of special mappings.

Well, I deleted the code that unmaps the old vdso because I didn't see
a reason why that's bad and wanted to reduce the code. But now I do see
the reasons, thanks.

Hmm, what do you think if I do it a slightly different way than embedding
vm_special_mapping: just the old hack with vma_ops? If I add a close()
hook there and set context.vdso = NULL in it, then I can test for that
on remap. This also has the nice side effect of restricting partial
munmap of the vdso blob. Does this sound sane?
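
Roughly something like this (a sketch of the idea only, nothing tested;
the other vm_ops hooks are left out):

  static void vdso_vma_close(struct vm_area_struct *vma)
  {
          /* forget the old blob, so a later ARCH_MAP_VDSO_* (or a
           * remap) can tell that it is gone */
          vma->vm_mm->context.vdso = NULL;
  }

  static const struct vm_operations_struct vdso_vma_ops = {
          .close  = vdso_vma_close,
  };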

* Re: [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  2016-07-07 11:11     ` Dmitry Safonov
@ 2016-07-10 12:44       ` Andy Lutomirski
  2016-07-11 18:26         ` Oleg Nesterov
  0 siblings, 1 reply; 21+ messages in thread
From: Andy Lutomirski @ 2016-07-10 12:44 UTC (permalink / raw)
  To: Dmitry Safonov, Oleg Nesterov
  Cc: Michal Hocko, Vladimir Davydov, linux-kernel, Dmitry Safonov,
	linux-mm, Ingo Molnar, Cyrill Gorcunov, xemul, Andy Lutomirski,
	Thomas Gleixner, H. Peter Anvin, X86 ML

On Thu, Jul 7, 2016 at 4:11 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
> On 07/06/2016 05:30 PM, Andy Lutomirski wrote:
>>
>> On Wed, Jun 29, 2016 at 3:57 AM, Dmitry Safonov <dsafonov@virtuozzo.com>
>> wrote:
>>>
>>> Add an API to change the vdso blob type with arch_prctl.
>>> As this is useful only for the needs of CRIU, expose
>>> this interface under CONFIG_CHECKPOINT_RESTORE.
>>
>>
>>> +#ifdef CONFIG_CHECKPOINT_RESTORE
>>> +       case ARCH_MAP_VDSO_X32:
>>> +               return do_map_vdso(VDSO_X32, addr, false);
>>> +       case ARCH_MAP_VDSO_32:
>>> +               return do_map_vdso(VDSO_32, addr, false);
>>> +       case ARCH_MAP_VDSO_64:
>>> +               return do_map_vdso(VDSO_64, addr, false);
>>> +#endif
>>> +
>>
>>
>> This will have an odd side effect: if the old mapping is still around,
>> its .fault will start behaving erratically.  I wonder if we can either
>> reliably zap the old vma (or check that it's not there any more)
>> before mapping a new one or whether we can associate the vdso image
>> with the vma (possibly by having a separate vm_special_mapping for
>> each vdso_image.  The latter is quite easy: change vdso_image to embed
>> vm_special_mapping and use container_of in vdso_fault to fish the
>> vdso_image back out.  But we'd have to embed another
>> vm_special_mapping for the vvar mapping as well for the same reason.
>>
>> I'm also a bit concerned that __install_special_mapping might not get
>> all the cgroup and rlimit stuff right.  If we ensure that any old
>> mappings are gone, then the damage is bounded, but otherwise someone
>> might call this in a loop and fill their address space with arbitrary
>> numbers of special mappings.
>
>
> Well, I deleted the code that unmaps the old vdso because I didn't see
> a reason why that's bad and wanted to reduce the code. But now I do see
> the reasons, thanks.
>
> Hmm, what do you think if I do it a slightly different way than embedding
> vm_special_mapping: just the old hack with vma_ops? If I add a close()
> hook there and set context.vdso = NULL in it, then I can test for that
> on remap. This also has the nice side effect of restricting partial
> munmap of the vdso blob. Does this sound sane?

I think so, as long as you do something to make sure that vvar gets
unmapped as well.

Oleg, want to sanity-check us?  Do you believe that if .mremap ensures
that only entire vma can be remapped and .close ensures that only the
whole vma can be unmapped, are we okay?  Or will we have issues with
mprotect?


-- 
Andy Lutomirski
AMA Capital Management, LLC

* Re: [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  2016-07-10 12:44       ` Andy Lutomirski
@ 2016-07-11 18:26         ` Oleg Nesterov
  2016-07-11 18:28           ` Andy Lutomirski
  0 siblings, 1 reply; 21+ messages in thread
From: Oleg Nesterov @ 2016-07-11 18:26 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Dmitry Safonov, Michal Hocko, Vladimir Davydov, linux-kernel,
	Dmitry Safonov, linux-mm, Ingo Molnar, Cyrill Gorcunov, xemul,
	Andy Lutomirski, Thomas Gleixner, H. Peter Anvin, X86 ML

On 07/10, Andy Lutomirski wrote:
>
> On Thu, Jul 7, 2016 at 4:11 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
> > On 07/06/2016 05:30 PM, Andy Lutomirski wrote:
> >>
> >> On Wed, Jun 29, 2016 at 3:57 AM, Dmitry Safonov <dsafonov@virtuozzo.com>
> >> wrote:
> >>>
> >>> Add an API to change the vdso blob type with arch_prctl.
> >>> As this is useful only for the needs of CRIU, expose
> >>> this interface under CONFIG_CHECKPOINT_RESTORE.
> >>
> >>
> >>> +#ifdef CONFIG_CHECKPOINT_RESTORE
> >>> +       case ARCH_MAP_VDSO_X32:
> >>> +               return do_map_vdso(VDSO_X32, addr, false);
> >>> +       case ARCH_MAP_VDSO_32:
> >>> +               return do_map_vdso(VDSO_32, addr, false);
> >>> +       case ARCH_MAP_VDSO_64:
> >>> +               return do_map_vdso(VDSO_64, addr, false);
> >>> +#endif
> >>> +
> >>
> >>
> >> This will have an odd side effect: if the old mapping is still around,
> >> its .fault will start behaving erratically.

Yes but I am not sure I fully understand your concerns, so let me ask...

Do we really care? I mean, the kernel can't crash or something like this,
just the old vdso mapping can fault in the "wrong" page from the new
vdso_image, right?

The user of prctl(ARCH_MAP_VDSO) should understand what it does and unmap
the old vdso anyway.

> >> I wonder if we can either
> >> reliably zap the old vma (or check that it's not there any more)
> >> before mapping a new one

However, I think this is right anyway, please see below...

> >> or whether we can associate the vdso image
> >> with the vma (possibly by having a separate vm_special_mapping for
> >> each vdso_image.

Yes, I too thought it would be nice to do this, regardless.

But as you said, we probably want to limit the number of special mappings
an application can create:

> >> I'm also a bit concerned that __install_special_mapping might not get
> >> all the cgroup and rlimit stuff right.  If we ensure that any old
> >> mappings are gone, then the damage is bounded, but otherwise someone
> >> might call this in a loop and fill their address space with arbitrary
> >> numbers of special mappings.

I think you are right, we should not allow user-space to abuse the special
mappings. Even if iiuc in this case only RLIMIT_AS does matter...

> Oleg, want to sanity-check us?  Do you believe that if .mremap ensures
> that only entire vma can be remapped

Yes I think this makes sense. And damn we should kill arch_remap() ;)

> and .close ensures that only the
> whole vma can be unmapped,

How? It can't return the error.

And do_munmap() doesn't necessarily call ->close(),

> Or will we have issues with
> mprotect?

Yes, __split_vma() doesn't call ->close() too. ->open() can't help...

So it seems that we should do this by hand somehow. But in fact, what
I actually think right now is that I am totally confused and got lost ;)

Oleg.

* Re: [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  2016-07-11 18:26         ` Oleg Nesterov
@ 2016-07-11 18:28           ` Andy Lutomirski
  2016-07-12 14:14             ` Oleg Nesterov
  0 siblings, 1 reply; 21+ messages in thread
From: Andy Lutomirski @ 2016-07-11 18:28 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Dmitry Safonov, Michal Hocko, Vladimir Davydov, linux-kernel,
	Dmitry Safonov, linux-mm, Ingo Molnar, Cyrill Gorcunov, xemul,
	Andy Lutomirski, Thomas Gleixner, H. Peter Anvin, X86 ML

On Mon, Jul 11, 2016 at 11:26 AM, Oleg Nesterov <oleg@redhat.com> wrote:
> On 07/10, Andy Lutomirski wrote:
>>
>> On Thu, Jul 7, 2016 at 4:11 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
>> > On 07/06/2016 05:30 PM, Andy Lutomirski wrote:
>> >>
>> >> On Wed, Jun 29, 2016 at 3:57 AM, Dmitry Safonov <dsafonov@virtuozzo.com>
>> >> wrote:
>> >>>
>> >>> Add an API to change the vdso blob type with arch_prctl.
>> >>> As this is useful only for the needs of CRIU, expose
>> >>> this interface under CONFIG_CHECKPOINT_RESTORE.
>> >>
>> >>
>> >>> +#ifdef CONFIG_CHECKPOINT_RESTORE
>> >>> +       case ARCH_MAP_VDSO_X32:
>> >>> +               return do_map_vdso(VDSO_X32, addr, false);
>> >>> +       case ARCH_MAP_VDSO_32:
>> >>> +               return do_map_vdso(VDSO_32, addr, false);
>> >>> +       case ARCH_MAP_VDSO_64:
>> >>> +               return do_map_vdso(VDSO_64, addr, false);
>> >>> +#endif
>> >>> +
>> >>
>> >>
>> >> This will have an odd side effect: if the old mapping is still around,
>> >> its .fault will start behaving erratically.
>
> Yes but I am not sure I fully understand your concerns, so let me ask...
>
> Do we really care? I mean, the kernel can't crash or something like this,
> just the old vdso mapping can fault in the "wrong" page from the new
> vdso_image, right?

That makes me nervous.  IMO a mapping should have well-defined
semantics.  If nothing else, could be really messy if the list of
pages were wrong.

My real concern is DoS: I doubt that __install_special_mapping gets
all the accounting right.

>
> The user of prctl(ARCH_MAP_VDSO) should understand what it does and unmap
> the old vdso anyway.
>
>> >> I wonder if we can either
>> >> reliably zap the old vma (or check that it's not there any more)
>> >> before mapping a new one
>
> However, I think this is right anyway, please see below...
>
>> >> or whether we can associate the vdso image
>> >> with the vma (possibly by having a separate vm_special_mapping for
>> >> each vdso_image.
>
> Yes, I too thought it would be nice to do this, regardless.
>
> But as you said, we probably want to limit the number of special mappings
> an application can create:
>
>> >> I'm also a bit concerned that __install_special_mapping might not get
>> >> all the cgroup and rlimit stuff right.  If we ensure that any old
>> >> mappings are gone, then the damage is bounded, but otherwise someone
>> >> might call this in a loop and fill their address space with arbitrary
>> >> numbers of special mappings.
>
> I think you are right, we should not allow user-space to abuse the special
> mappings. Even if iiuc in this case only RLIMIT_AS does matter...
>
>> Oleg, want to sanity-check us?  Do you believe that if .mremap ensures
>> that only entire vma can be remapped
>
> Yes I think this makes sense. And damn we should kill arch_remap() ;)
>
>> and .close ensures that only the
>> whole vma can be unmapped,
>
> How? It can't return the error.
>
> And do_munmap() doesn't necessarily call ->close(),
>
>> Or will we have issues with
>> mprotect?
>
> Yes, __split_vma() doesn't call ->close() too. ->open() can't help...
>
> So it seems that we should do this by hand somehow. But in fact, what
> I actually think right now is that I am totally confused and got lost ;)

I'm starting to wonder if we should finally suck it up and give
special mappings a non-NULL vm_file so we can track them properly.
Oleg, weren't you thinking of doing that for some other reason?

--Andy

* Re: [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  2016-07-11 18:28           ` Andy Lutomirski
@ 2016-07-12 14:14             ` Oleg Nesterov
  2016-08-02 10:59               ` Dmitry Safonov
  0 siblings, 1 reply; 21+ messages in thread
From: Oleg Nesterov @ 2016-07-12 14:14 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Dmitry Safonov, Michal Hocko, Vladimir Davydov, linux-kernel,
	Dmitry Safonov, linux-mm, Ingo Molnar, Cyrill Gorcunov, xemul,
	Andy Lutomirski, Thomas Gleixner, H. Peter Anvin, X86 ML

On 07/11, Andy Lutomirski wrote:
>
> On Mon, Jul 11, 2016 at 11:26 AM, Oleg Nesterov <oleg@redhat.com> wrote:
> >
> > Do we really care? I mean, the kernel can't crash or something like this,
> > just the old vdso mapping can fault in the "wrong" page from the new
> > vdso_image, right?
>
> That makes me nervous.  IMO a mapping should have well-defined
> semantics.

Perhaps, but map_vdso() will be special anyway; it also changes ->vdso.

For example, if a 32-bit application calls prctl(ARCH_MAP_VDSO) from a
signal handler and we unmap the old vdso mapping, it will crash later
trying to call the (unmapped) restorer == kernel_rt_sigreturn.

> If nothing else, could be really messy if the list of
> pages were wrong.

I do not see anything really wrong, but I can easily miss something.

And don't get me wrong, I agree that any cleanup (say, associate vdso
image with vma) makes sense.

> My real concern is DoS: I doubt that __install_special_mapping gets
> all the accounting right.

Yes, and if it was not clear I fully agree. Even if we forget about the
accounting, I feel that special mappings must not be abused by userspace.

> > So it seems that we should do this by hand somehow. But in fact, what
> > I actually think right now is that I am totally confused and got lost ;)
>
> I'm starting to wonder if we should finally suck it up and give
> special mappings a non-NULL vm_file so we can track them properly.
> Oleg, weren't you thinking of doing that for some other reason?

Yes, uprobes. Currently we can't probe vdso page(s).

Oleg.

* Re: [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  2016-07-12 14:14             ` Oleg Nesterov
@ 2016-08-02 10:59               ` Dmitry Safonov
  2016-08-10  8:35                 ` Andy Lutomirski
  0 siblings, 1 reply; 21+ messages in thread
From: Dmitry Safonov @ 2016-08-02 10:59 UTC (permalink / raw)
  To: Oleg Nesterov, Andy Lutomirski
  Cc: Michal Hocko, Vladimir Davydov, linux-kernel, Dmitry Safonov,
	linux-mm, Ingo Molnar, Cyrill Gorcunov, xemul, Andy Lutomirski,
	Thomas Gleixner, H. Peter Anvin, X86 ML

On 07/12/2016 05:14 PM, Oleg Nesterov wrote:
> On 07/11, Andy Lutomirski wrote:
>> I'm starting to wonder if we should finally suck it up and give
>> special mappings a non-NULL vm_file so we can track them properly.
>> Oleg, weren't you thinking of doing that for some other reason?
>
> Yes, uprobes. Currently we can't probe vdso page(s).

So, to make sure that I've understood correctly, I need to:
o add vm_file to the vdso/vvar vmas; __install_special_mapping will init
   them;
o place the pages[] array inside f_mapping;
o create an f_inode for each file -- for this we need some mount point, so
   I'll create something like vdsofs, register this filesystem and mount
   it in an initcall (or from do_basic_setup, the way shmem does it).

Is this the idea, or did I get it wrong?

Or maybe the idea is to create a fake vm_file just for reference
counting, and not treat/init it like a file with an inode, etc.?
With a fake file I can also check whether the vdso is already mapped, but
I'm sure a fake file will not help Oleg with uprobes.

-- 
              Dmitry

* Re: [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  2016-08-02 10:59               ` Dmitry Safonov
@ 2016-08-10  8:35                 ` Andy Lutomirski
  2016-08-10 10:49                   ` Dmitry Safonov
  0 siblings, 1 reply; 21+ messages in thread
From: Andy Lutomirski @ 2016-08-10  8:35 UTC (permalink / raw)
  To: Dmitry Safonov
  Cc: Oleg Nesterov, Michal Hocko, Vladimir Davydov, linux-kernel,
	Dmitry Safonov, linux-mm, Ingo Molnar, Cyrill Gorcunov, xemul,
	Andy Lutomirski, Thomas Gleixner, H. Peter Anvin, X86 ML

On Tue, Aug 2, 2016 at 3:59 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
> On 07/12/2016 05:14 PM, Oleg Nesterov wrote:
>>
>> On 07/11, Andy Lutomirski wrote:
>>>
>>> I'm starting to wonder if we should finally suck it up and give
>>> special mappings a non-NULL vm_file so we can track them properly.
>>> Oleg, weren't you thinking of doing that for some other reason?
>>
>>
>> Yes, uprobes. Currently we can't probe vdso page(s).
>
>
> So, to make sure that I've understood correctly, I need to:
> o add vm_file to the vdso/vvar vmas; __install_special_mapping will init
>    them;
> o place the pages[] array inside f_mapping;
> o create an f_inode for each file -- for this we need some mount point, so
>    I'll create something like vdsofs, register this filesystem and mount
>    it in an initcall (or from do_basic_setup, the way shmem does it).
>
> Is this the idea, or did I get it wrong?
>
> Or maybe the idea is to create a fake vm_file just for reference
> counting, and not treat/init it like a file with an inode, etc.?
> With a fake file I can also check whether the vdso is already mapped, but
> I'm sure a fake file will not help Oleg with uprobes.

This would work, but it might be complicated.  I'm not an expert at mm
internals.

Another approach would be to just iterate over all vmas and look for
old copies of the special mapping.
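
Very roughly, that could look like the sketch below (illustrative only;
it assumes the vdso/vvar vm_special_mapping structures get file scope so
they can be compared against, and that mmap_sem is held for write):

  static void unmap_old_vdso(struct mm_struct *mm)
  {
          struct vm_area_struct *vma, *next;

          for (vma = mm->mmap; vma; vma = next) {
                  next = vma->vm_next;
                  /* special mappings keep their vm_special_mapping
                   * pointer in vm_private_data */
                  if (vma->vm_private_data == &vdso_mapping ||
                      vma->vm_private_data == &vvar_mapping)
                          do_munmap(mm, vma->vm_start,
                                    vma->vm_end - vma->vm_start);
          }
          mm->context.vdso = NULL;
  }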

--Andy

* Re: [PATCHv2 3/6] x86/arch_prctl/vdso: add ARCH_MAP_VDSO_*
  2016-08-10  8:35                 ` Andy Lutomirski
@ 2016-08-10 10:49                   ` Dmitry Safonov
  0 siblings, 0 replies; 21+ messages in thread
From: Dmitry Safonov @ 2016-08-10 10:49 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Oleg Nesterov, Michal Hocko, Vladimir Davydov, linux-kernel,
	Dmitry Safonov, linux-mm, Ingo Molnar, Cyrill Gorcunov, xemul,
	Andy Lutomirski, Thomas Gleixner, H. Peter Anvin, X86 ML

On 08/10/2016 11:35 AM, Andy Lutomirski wrote:
> On Tue, Aug 2, 2016 at 3:59 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
>> On 07/12/2016 05:14 PM, Oleg Nesterov wrote:
>>>
>>> On 07/11, Andy Lutomirski wrote:
>>>>
>>>> I'm starting to wonder if we should finally suck it up and give
>>>> special mappings a non-NULL vm_file so we can track them properly.
>>>> Oleg, weren't you thinking of doing that for some other reason?
>>>
>>>
>>> Yes, uprobes. Currently we can't probe vdso page(s).
>>
>>
>> So, to make sure that I've understood correctly, I need to:
>> o add vm_file to the vdso/vvar vmas; __install_special_mapping will init
>>    them;
>> o place the pages[] array inside f_mapping;
>> o create an f_inode for each file -- for this we need some mount point, so
>>    I'll create something like vdsofs, register this filesystem and mount
>>    it in an initcall (or from do_basic_setup, the way shmem does it).
>>
>> Is this the idea, or did I get it wrong?
>>
>> Or maybe the idea is to create a fake vm_file just for reference
>> counting, and not treat/init it like a file with an inode, etc.?
>> With a fake file I can also check whether the vdso is already mapped, but
>> I'm sure a fake file will not help Oleg with uprobes.
>
> This would work, but it might be complicated.  I'm not an expert at mm
> internals.

OK, thanks for the answer!
I'll try to prepare the patches in the coming days -- they will help to
track the vdso and will also help with uprobes. If this becomes too
complex, I'll just iterate over the vmas.

>
> Another approach would be to just iterate over all vmas and look for
> old copies of the special mapping.

Thanks,
              Dmitry
