From: Denys Vlasenko
To: Ingo Molnar
Cc: Denys Vlasenko, Linus Torvalds, Steven Rostedt, Borislav Petkov,
	"H. Peter Anvin", Andy Lutomirski, Oleg Nesterov, Frederic Weisbecker,
	Alexei Starovoitov, Will Drewry, Kees Cook, x86@kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] x86/asm/entry/32: Explain stub32_clone logic
Date: Wed, 22 Apr 2015 18:40:07 +0200
Message-Id: <1429720808-7173-1-git-send-email-dvlasenk@redhat.com>

The reason for copying %r8 to %rcx is quite non-obvious.
Add a comment which explains why it is done.

Fix indentation and trailing whitespace while at it.

Signed-off-by: Denys Vlasenko
---
CC: Linus Torvalds
CC: Steven Rostedt
CC: Ingo Molnar
CC: Borislav Petkov
CC: "H. Peter Anvin"
CC: Andy Lutomirski
CC: Oleg Nesterov
CC: Frederic Weisbecker
CC: Alexei Starovoitov
CC: Will Drewry
CC: Kees Cook
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org

 arch/x86/ia32/ia32entry.S | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S
index 2ca052e..8e72256 100644
--- a/arch/x86/ia32/ia32entry.S
+++ b/arch/x86/ia32/ia32entry.S
@@ -562,9 +562,17 @@ GLOBAL(\label)
 
 	ALIGN
 GLOBAL(stub32_clone)
-	leaq sys_clone(%rip),%rax
+	leaq sys_clone(%rip), %rax
+	/*
+	 * 32-bit clone API is clone(..., int tls_val, int *child_tidptr).
+	 * 64-bit clone API is clone(..., int *child_tidptr, int tls_val).
+	 * Native 64-bit kernel's sys_clone() implements the latter.
+	 * We need to swap args here. But since tls_val is in fact ignored
+	 * by sys_clone(), we can get away with an assignment
+	 * (arg4 = arg5) instead of a full swap:
+	 */
 	mov	%r8, %rcx
-	jmp  ia32_ptregs_common
+	jmp	ia32_ptregs_common
 
 	ALIGN
 ia32_ptregs_common:
-- 
1.8.1.4
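
A C-level illustration of the argument shuffle described above: in the
64-bit C calling convention arg4 is passed in %rcx and arg5 in %r8, so
"mov %r8, %rcx" is literally the "arg4 = arg5" assignment the new comment
mentions. The sketch below is illustrative only -- the clone32()/clone64()
names, parameter types and demo values are assumptions, not copies of the
kernel's declarations; only the argument order follows the patch comment.

/*
 * Sketch only: clone32()/clone64() and the parameter types are
 * illustrative assumptions; the argument order mirrors the comment
 * added by the patch.
 */
#include <stdio.h>

/* Native 64-bit order: ..., child_tidptr, tls_val (tls_val is ignored). */
static long clone64(unsigned long flags, unsigned long newsp,
		    int *parent_tidptr, int *child_tidptr,
		    unsigned long tls_val)
{
	(void)flags; (void)newsp; (void)parent_tidptr; (void)tls_val;
	printf("clone64: child_tidptr=%p\n", (void *)child_tidptr);
	return 0;
}

/* 32-bit order: ..., tls_val, child_tidptr. */
static long clone32(unsigned long flags, unsigned long newsp,
		    int *parent_tidptr, unsigned long tls_val,
		    int *child_tidptr)
{
	/*
	 * A full swap would be clone64(..., child_tidptr, tls_val).
	 * Because tls_val is ignored, moving arg5 into the arg4 slot
	 * and leaving arg5 as-is is enough -- the C equivalent of
	 * "mov %r8, %rcx" (%rcx carries arg4, %r8 carries arg5).
	 */
	(void)tls_val;
	return clone64(flags, newsp, parent_tidptr, child_tidptr,
		       (unsigned long)child_tidptr);
}

int main(void)
{
	int child_tid = 0;

	/* 32-bit caller's view: tls_val in arg4, child_tidptr in arg5. */
	return (int)clone32(0, 0, NULL, 0x1234 /* tls_val */, &child_tid);
}

Compiled and run, the program just prints the forwarded child_tidptr; the
point is only to show why leaving the 5th argument slot with a stale value
is harmless when the callee ignores it.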