Date: Mon, 30 Jan 2023 18:36:32 +0000
From: Mark Rutland
To: Peter Zijlstra
Cc: Josh Poimboeuf, Petr Mladek, Joe Lawrence, kvm@vger.kernel.org,
    "Michael S. Tsirkin", netdev@vger.kernel.org, Jiri Kosina,
    linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
    "Seth Forshee (DigitalOcean)", live-patching@vger.kernel.org,
    Miroslav Benes
Subject: Re: [PATCH 0/2] vhost: improve livepatch switching for heavily
    loaded vhost worker kthreads
References: <20230120-vhost-klp-switching-v1-0-7c2b65519c43@kernel.org>
    <20230127044355.frggdswx424kd5dq@treble>
    <20230127165236.rjcp6jm6csdta6z3@treble>
    <20230127170946.zey6xbr4sm4kvh3x@treble>
    <20230127221131.sdneyrlxxhc4h3fa@treble>

On Mon, Jan 30, 2023 at 01:40:18PM +0100, Peter Zijlstra wrote:
> On Fri, Jan 27, 2023 at 02:11:31PM -0800, Josh Poimboeuf wrote:
> > @@ -8500,8 +8502,10 @@ EXPORT_STATIC_CALL_TRAMP(might_resched);
> >  static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
> >  int __sched dynamic_cond_resched(void)
> >  {
> > -	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
> > +	if (!static_branch_unlikely(&sk_dynamic_cond_resched)) {
> > +		klp_sched_try_switch();
> >  		return 0;
> > +	}
> >  	return __cond_resched();
> >  }
> >  EXPORT_SYMBOL(dynamic_cond_resched);
> 
> I would make the klp_sched_try_switch() not depend on
> sk_dynamic_cond_resched, because __cond_resched() is not a guaranteed
> pass through __schedule().
> 
> But you'll probably want to check with Mark here, this all might
> generate crap code on arm64.

IIUC here klp_sched_try_switch() is a static call, so on arm64 this'll
generate at least a load, a conditional branch, and an indirect branch.
That's not ideal, but I'd have to benchmark it to find out whether it's
a significant overhead relative to the baseline of PREEMPT_DYNAMIC.

For arm64 it'd be a bit nicer to have another static key check, and a
call to __klp_sched_try_switch(). That way the static key check gets
turned into a NOP in the common case, and the call to
__klp_sched_try_switch() can be a direct call (potentially a tail-call
if we made it return 0).

Thanks,
Mark.
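
P.S. To make the shape of that concrete, here is a rough sketch of the
static key + direct call variant. The key name (klp_sched_try_switch_key),
the helper definition, and the unconditional placement in
dynamic_cond_resched() (following Peter's point about not depending on
sk_dynamic_cond_resched) are illustrative assumptions only, not an actual
patch:

/* Needs <linux/jump_label.h> for the static key machinery. */
DEFINE_STATIC_KEY_FALSE(klp_sched_try_switch_key);	/* hypothetical key name */

static __always_inline void klp_sched_try_switch(void)
{
	/*
	 * The static branch compiles to a NOP in the common case and is
	 * only patched to a taken branch while a livepatch transition is
	 * pending.
	 */
	if (static_branch_unlikely(&klp_sched_try_switch_key))
		__klp_sched_try_switch();	/* direct call, no indirect branch */
}

int __sched dynamic_cond_resched(void)
{
	/* Not gated on sk_dynamic_cond_resched; every call checks the key. */
	klp_sched_try_switch();

	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
		return 0;
	return __cond_resched();
}

With something like that, the fast path in dynamic_cond_resched() only pays
for the NOP, and the slow path becomes a direct (potentially tail) call into
the livepatch code rather than a load plus indirect branch.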