From: Chen Yu
Date: Sat, 2 Apr 2022 02:04:09 +0800
Subject: Re: [sched/fair] ddb3b1126f: hackbench.throughput -25.9% regression
To: Tim Chen
Cc: kernel test robot, 0day robot, Chen Yu, Walter Mack, LKML, lkp@lists.01.org, Huang Ying, feng.tang@intel.com, zhengjun.xing@linux.intel.com, fengwei.yin@intel.com, Peter Zijlstra, Vincent Guittot, Ingo Molnar, Juri Lelli, Mel Gorman, Aubrey Li
In-Reply-To: <7aa67fedb4b6dc9126bc59ee993fa18d0e472475.camel@linux.intel.com>
References: <20220330094632.GB6999@xsang-OptiPlex-9020> <7aa67fedb4b6dc9126bc59ee993fa18d0e472475.camel@linux.intel.com>
List-ID: linux-kernel@vger.kernel.org

On Thu, Mar 31, 2022 at 11:42 AM Tim Chen wrote:
>
> On Wed, 2022-03-30 at 17:46 +0800, kernel test robot wrote:
> >
> > Greeting,
> >
> > FYI, we noticed a -25.9% regression of hackbench.throughput due to commit:
> >
>
> Will try to check the regression seen.
>
I double-checked that the regression can be reproduced on top of the latest
sched/core branch:

  parent ("sched/fair: Don't rely on ->exec_start for migration")
  fbc    ("sched/fair: Simple runqueue order on migrate")

      parent                 fbc
       91107    -40.8%     53897    hackbench.throughput

This is consistent with lkp's original report that the context switch
count is much higher with the patch applied:

     9591919   +510.3%   58534937   hackbench.time.involuntary_context_switches
    36451523   +281.5%   1.391e+08  hackbench.time.voluntary_context_switches

Considering that this patch 'raises' the priority of the migrated task by
giving it cfs_rq->min_vruntime, the migrated task could preempt the
currently running task more easily.

    0.00   +12.2   12.21   perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function

The patched version also spends more time in enqueue_entity(), which might
be caused by walking the sched entity hierarchy from leaf to root, as
mentioned in another thread.

--
Thanks,
Chenyu