From mboxrd@z Thu Jan 1 00:00:00 1970
From: John Stultz
Date: Mon, 14 Nov 2022 23:08:36 -0800
Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
To: Alexander Gordeev
Cc: Joel Fernandes, LKML, "Connor O'Brien", John Dias, Rick Yiu, John Kacur,
 Qais Yousef, Chris Redpath, Abhijeet Dharmapurikar, Peter Zijlstra,
 Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
 Thomas Gleixner, kernel-team@android.com, "J . Avila"
References: <20221003232033.3404802-1-jstultz@google.com> <20221003232033.3404802-3-jstultz@google.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, Oct 23, 2022 at 12:45 AM Alexander Gordeev wrote:
>
> On Sat, Oct 22, 2022 at 06:34:37PM +0000, Joel Fernandes wrote:
> > > In my reading of your approach if you find a way to additionally
> > > indicate long softirqs being handled by the remote ksoftirqd, it
> > > would cover all obvious/not-corner cases.
> >
> > How will that help? The long softirq executing inside ksoftirqd will disable
> > preemption and prevent any RT task from executing.
>
> Right.
> So the check to deem a remote CPU unfit would (logically) look like this:
>
>     (active | pending | ksoftirqd) & LONG_SOFTIRQ_MASK
>

Alexander,
  Apologies for the late response here; some other work took priority for a bit.

Thanks again for the feedback. I wanted to follow up on your suggestion here, though, as I'm not quite sure I see why we need the separate ksoftirqd bitmask. run_ksoftirqd() basically looks at the pending set and calls __do_softirq(), which then moves the bits from the pending mask to the active mask while they are being run.

So (pending | active) & LONG_SOFTIRQ_MASK seems like it should be a sufficient check, regardless of whether the remote cpu is running the softirq in softirq context or in ksoftirqd, no?

thanks
-john