From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Yang Jihong, "Masami Hiramatsu (Google)"
Subject: [PATCH 5.15 475/567] x86/kprobes: Fix __recover_optprobed_insn check optimizing logic
Date: Tue, 7 Mar 2023 18:03:31 +0100
Message-Id: <20230307165926.495250939@linuxfoundation.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230307165905.838066027@linuxfoundation.org>
References: <20230307165905.838066027@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Yang Jihong

commit 868a6fc0ca2407622d2833adefe1c4d284766c4c upstream.

Since the following commit:

  commit f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")

modified the update timing of the KPROBE_FLAG_OPTIMIZED flag, an
optimized_kprobe may be in either the optimizing or the unoptimizing state
when op.kp->flags has KPROBE_FLAG_OPTIMIZED set and op->list is not empty.

The check logic in __recover_optprobed_insn() is therefore incorrect: a
kprobe in the unoptimizing state may be wrongly treated as still being
under optimization, and as a result incorrect instructions are copied.

The optprobe_queued_unopt() function needs to be exported so that it can be
called from the arch directory.
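For illustration only (not part of the patch), the sketch below shows how the
ambiguous "KPROBE_FLAG_OPTIMIZED set, op->list non-empty" case is resolved once
optprobe_queued_unopt() is visible to the arch code. The helper name
insn_buffer_is_recoverable() is hypothetical; optprobe_queued_unopt(),
list_empty() and struct optimized_kprobe are the names used by the patch:

  /*
   * Illustrative sketch only, not kernel code: how the three states of an
   * optimized_kprobe are told apart after this fix.  The helper name is
   * hypothetical; the calls it makes come from the patch.
   */
  static bool insn_buffer_is_recoverable(struct optimized_kprobe *op)
  {
          /* Not on any optimizer queue: optimization finished, jump installed. */
          if (list_empty(&op->list))
                  return true;
          /*
           * Queued for unoptimization: the jump is still installed, so the
           * saved copy of the displaced instructions must still be used.
           */
          if (optprobe_queued_unopt(op))
                  return true;
          /*
           * Otherwise the probe is still being optimized and the original
           * text at the probe address has not been overwritten yet.
           */
          return false;
  }
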
Link: https://lore.kernel.org/all/20230216034247.32348-2-yangjihong1@huawei.com/
Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
Cc: stable@vger.kernel.org
Signed-off-by: Yang Jihong
Acked-by: Masami Hiramatsu (Google)
Signed-off-by: Masami Hiramatsu (Google)
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kernel/kprobes/opt.c |    4 ++--
 include/linux/kprobes.h       |    1 +
 kernel/kprobes.c              |    2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -46,8 +46,8 @@ unsigned long __recover_optprobed_insn(k
                 /* This function only handles jump-optimized kprobe */
                 if (kp && kprobe_optimized(kp)) {
                         op = container_of(kp, struct optimized_kprobe, kp);
-                        /* If op->list is not empty, op is under optimizing */
-                        if (list_empty(&op->list))
+                        /* If op is optimized or under unoptimizing */
+                        if (list_empty(&op->list) || optprobe_queued_unopt(op))
                                 goto found;
                 }
         }
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -349,6 +349,7 @@ extern int proc_kprobes_optimization_han
                                              size_t *length, loff_t *ppos);
 #endif
 extern void wait_for_kprobe_optimizer(void);
+bool optprobe_queued_unopt(struct optimized_kprobe *op);
 #else
 static inline void wait_for_kprobe_optimizer(void) { }
 #endif /* CONFIG_OPTPROBES */
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -656,7 +656,7 @@ void wait_for_kprobe_optimizer(void)
         mutex_unlock(&kprobe_mutex);
 }

-static bool optprobe_queued_unopt(struct optimized_kprobe *op)
+bool optprobe_queued_unopt(struct optimized_kprobe *op)
 {
         struct optimized_kprobe *_op;