From: Josh Don
Date: Mon, 11 Oct 2021 17:31:41 -0700
Subject: Re: [PATCH] sched/core: forced idle accounting
To: Hao Luo
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Joel Fernandes, Vineeth Pillai,
	linux-kernel
References: <20211008000825.1364224-1-joshdon@google.com>

On Mon, Oct 11, 2021 at 10:33 AM Hao Luo wrote:
>
> On Thu, Oct 7, 2021 at 5:08 PM Josh Don wrote:
> > -void sched_core_dequeue(struct rq *rq, struct task_struct *p)
> > +void sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags)
> >  {
> >         rq->core->core_task_seq++;
> >
> > -       if (!sched_core_enqueued(p))
> > -               return;
> > +       if (sched_core_enqueued(p)) {
> > +               rb_erase(&p->core_node, &rq->core_tree);
> > +               RB_CLEAR_NODE(&p->core_node);
> > +       }
> >
> > -       rb_erase(&p->core_node, &rq->core_tree);
> > -       RB_CLEAR_NODE(&p->core_node);
> > +       /*
> > +        * Migrating the last task off the cpu, with the cpu in forced idle
> > +        * state. Reschedule to create an accounting edge for forced idle,
> > +        * and re-examine whether the core is still in forced idle state.
> > +        */
> > +       if (!(flags & DEQUEUE_SAVE) && rq->nr_running == 1 &&
> > +           rq->core->core_forceidle && rq->curr == rq->idle)
> > +               resched_curr(rq);
>
> resched_curr() is probably an unwanted side effect of dequeue. Maybe we
> could extract the check and resched_curr() out into a function, and
> call the function outside of sched_core_dequeue(). That way, the
> interface of dequeue doesn't need to change.

This resched is an atypical case; normal load balancing won't steal
the last runnable task off a cpu. The main reasons this resched could
trigger are migration due to an affinity change, and migration due to
sched core doing a cookie_steal. We could bubble this up to
deactivate_task(), but it seems less brittle to keep it in dequeue()
with the check against DEQUEUE_SAVE (since this creates an important
accounting edge). Thoughts?
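
For reference, I think the extraction you're suggesting would look
roughly like the below (untested sketch, and the helper name is made
up; note it would have to run at the point where sched_core_dequeue()
runs today, i.e. before nr_running is decremented, for the
rq->nr_running == 1 check to hold):

/*
 * Hypothetical helper, per Hao's suggestion: keep the old
 * sched_core_dequeue() signature and do the forced idle
 * accounting-edge check in the caller (e.g. deactivate_task())
 * instead.
 */
static void sched_core_check_forceidle_edge(struct rq *rq)
{
	/*
	 * The last runnable task is migrating off a cpu whose core is
	 * in forced idle: resched to create the accounting edge and
	 * re-evaluate the core's forced idle state.
	 */
	if (rq->nr_running == 1 && rq->core->core_forceidle &&
	    rq->curr == rq->idle)
		resched_curr(rq);
}

Presumably the DEQUEUE_SAVE check then falls away, since save/restore
pairs wouldn't go through that call site, but that's exactly the kind
of implicit coupling that makes me prefer keeping it in dequeue().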
> >         /*
> > @@ -5765,7 +5782,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
> >                 for_each_cpu_wrap(i, smt_mask, cpu) {
> >                         rq_i = cpu_rq(i);
> >
> > -                       if (i != cpu)
> > +                       if (i != cpu && (rq_i != rq->core || !core_clock_updated))
> >                                 update_rq_clock(rq_i);
>
> Do you mean (rq_i != rq->core && !core_clock_updated)? I thought
> rq->core always has its clock updated.

rq->clock is updated on entry to pick_next_task(). rq->core's clock is
only guaranteed to be updated at this point if rq == rq->core, or if
we've already done the clock update for rq->core above.
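
To make the condition concrete, this is the intent (illustration only,
not from the patch; needs_clock_update() is a made-up name):

/*
 * Why || rather than &&: a sibling's clock must be updated here
 * unless it is already fresh, which is true for rq itself (updated on
 * entry to pick_next_task()) and for rq->core only when
 * core_clock_updated is set.
 */
static bool needs_clock_update(int i, int cpu, struct rq *rq_i,
			       struct rq *rq, bool core_clock_updated)
{
	if (i == cpu)
		return false;	/* rq's own clock was updated on entry */
	if (rq_i == rq->core && core_clock_updated)
		return false;	/* rq->core's clock was updated above */
	return true;		/* sibling, or rq->core not yet updated */
}

With && instead, an rq_i equal to rq->core would never get a clock
update here, even when core_clock_updated is false.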