From: Masami Hiramatsu <mhiramat@kernel.org>
To: Ingo Molnar, David Woodhouse
Cc: Masami Hiramatsu, linux-kernel@vger.kernel.org, Andi Kleen,
    Greg Kroah-Hartman, Arjan van de Ven, Peter Zijlstra,
    Ananth N Mavinakayanahalli, Thomas Gleixner, "H. Peter Anvin"
Subject: [PATCH v2 tip/master 3/3] kprobes/x86: Disable optimizing on the function jumps to indirect thunk
Date: Fri, 19 Jan 2018 01:15:20 +0900
Message-Id: <151629212062.10241.6991266100233002273.stgit@devbox>
In-Reply-To: <151629203720.10241.17490679760505352230.stgit@devbox>
References: <151629203720.10241.17490679760505352230.stgit@devbox>
User-Agent: StGit/0.17.1-dirty

Since indirect jump instructions will be replaced by jumps to
__x86_indirect_thunk_*, those jmp instructions must be treated as
indirect jumps. Since optprobe prohibits optimizing probes in functions
which use an indirect jump, it also needs to find functions which jump
to __x86_indirect_thunk_* and disable optimization for them.

This adds a check that the jump target address is between
__indirect_thunk_start/end when optimizing a kprobe.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/kernel/kprobes/opt.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index e941136e24d8..203d398802a3 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -40,6 +40,7 @@
 #include <asm/debugreg.h>
 #include <asm/set_memory.h>
 #include <asm/sections.h>
+#include <asm/nospec-branch.h>
 
 #include "common.h"
 
@@ -203,7 +204,7 @@ static int copy_optimized_instructions(u8 *dest, u8 *src, u8 *real)
 }
 
 /* Check whether insn is indirect jump */
-static int insn_is_indirect_jump(struct insn *insn)
+static int __insn_is_indirect_jump(struct insn *insn)
 {
 	return ((insn->opcode.bytes[0] == 0xff &&
 		(X86_MODRM_REG(insn->modrm.value) & 6) == 4) || /* Jump */
@@ -237,6 +238,26 @@ static int insn_jump_into_range(struct insn *insn, unsigned long start, int len)
 	return (start <= target && target <= start + len);
 }
 
+static int insn_is_indirect_jump(struct insn *insn)
+{
+	int ret = __insn_is_indirect_jump(insn);
+
+#ifdef CONFIG_RETPOLINE
+	/*
+	 * Jump to x86_indirect_thunk_* is treated as an indirect jump.
+	 * Note that even with CONFIG_RETPOLINE=y, the kernel compiled with
+	 * older gcc may use indirect jump. So we add this check instead of
+	 * replace indirect-jump check.
+	 */
+	if (!ret)
+		ret = insn_jump_into_range(insn,
+				(unsigned long)__indirect_thunk_start,
+				(unsigned long)__indirect_thunk_end -
+				(unsigned long)__indirect_thunk_start);
+#endif
+	return ret;
+}
+
 /* Decode whole function to ensure any instructions don't jump into target */
 static int can_optimize(unsigned long paddr)
 {