From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 25 Apr 2021 11:14:37 +0800
From: Feng Tang
To: "Paul E. McKenney"
Cc: Xing Zhengjun, Thomas Gleixner, John Stultz, Stephen Boyd,
	Jonathan Corbet, Mark Rutland, Marc Zyngier, Andi Kleen,
	Chris Mason, LKML, lkp@lists.01.org, lkp@intel.com
Subject: Re: [LKP] Re: [clocksource] 6c52b5f3cf: stress-ng.opcode.ops_per_sec -14.4% regression
Message-ID: <20210425031437.GA38485@shbuild999.sh.intel.com>
References: <20210421134224.GR975577@paulmck-ThinkPad-P17-Gen-1>
 <20210422074126.GA85095@shbuild999.sh.intel.com>
 <20210422142454.GD975577@paulmck-ThinkPad-P17-Gen-1>
 <20210422165743.GA162649@paulmck-ThinkPad-P17-Gen-1>
 <20210423061115.GA62813@shbuild999.sh.intel.com>
 <20210423140254.GM975577@paulmck-ThinkPad-P17-Gen-1>
 <20210424122920.GB85095@shbuild999.sh.intel.com>
 <20210424175322.GS975577@paulmck-ThinkPad-P17-Gen-1>
 <20210425021438.GA2942@shbuild999.sh.intel.com>
In-Reply-To: <20210425021438.GA2942@shbuild999.sh.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, Apr 25, 2021 at 10:14:38AM +0800, Feng Tang wrote:
> On Sat, Apr 24, 2021 at 10:53:22AM -0700, Paul E. McKenney wrote:
> > And if your 2/2 goes in, those who still distrust TSC will simply
> > revert it.  In their defense, their distrust was built up over a very
> > long period of time for very good reasons.
> >
> > > > This last sentence is not a theoretical statement.  In the past, I have
> > > > suggested using the existing "tsc=reliable" kernel boot parameter,
> > > > which disables watchdogs on TSC, similar to your patch 2/2 above.
> > > > The discussion was short and that boot parameter was not set.  And the
> > > > discussion motivated my current clocksource series.  ;-)
> > > >
> > > > I therefore suspect that someone will want a "tsc=unreliable" boot
> > > > parameter (or similar) to go with your patch 2/2.
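[Aside, not part of either patch set: for anyone poking at a specific box,
the standard clocksource sysfs files already show whether the watchdog has
demoted tsc and what the kernel is using instead; "tsc=reliable" on the
kernel command line is what keeps that demotion from happening in the
first place.  A rough sketch in plain C, nothing more than two sysfs
reads:]

/*
 * Rough illustration only: print the clocksource currently in use and
 * the registered alternatives.  After a watchdog trip,
 * current_clocksource silently changes from "tsc" to something like
 * "hpet" or "acpi_pm".
 */
#include <stdio.h>

static void dump(const char *path)
{
	char buf[128];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", path, buf);
	fclose(f);
}

int main(void)
{
	dump("/sys/devices/system/clocksource/clocksource0/current_clocksource");
	dump("/sys/devices/system/clocksource/clocksource0/available_clocksource");
	return 0;
}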
> > >
> > > Possibly :)
> > >
> > > But I wonder, if tsc is disabled on that 'large system', what will be
> > > used instead?  HPET is known to be much slower as a clocksource, as
> > > shown in this regression report :) not to mention the 'acpi_pm' timer.
> >
> > Indeed, the default switch to HPET often causes the system to be taken
> > out of service due to the resulting performance shortfall.  There is
> > of course some automated recovery, and no, I am not familiar with the
> > details, but I suspect that a simple reboot is an early recovery step.
> > However, if the problem were to persist, the system would of course be
> > considered to be permanently broken.
>
> Thanks for the info.  If a server is taken out of service just because
> of a false alarm about tsc, then it's a big waste!
>
> > > Again, I want to know the real tsc unstable cases.  I have spent lots
> > > of time searching for this info in git logs and mail archives before
> > > writing the patches.
> >
> > So do I, which is why I put together this patch series.  My employer has
> > a fairly strict upstream-first policy for things like this, which are
> > annoyances that are likely hiding other bugs but are not causing
> > significant outages; that was of course the motivation for the
> > fault-injection patches.
> >
> > As I said earlier, it would have been very helpful to you for a patch
> > series like this to have been applied many years ago.  If it had been,
> > we would already have the failure-rate data that you requested.  And of
> > course if that failure-rate data indicated that TSC was reliable, there
> > would be far fewer people still distrusting TSC.
>
> Yes, if they can share the detailed info (like what the 'watchdog' was)
> and debug data, it can enable people to debug and root-cause the problem
> as either a false alarm or a real silicon problem.  Personally, for
> newer platforms I tend to trust tsc much more than other clocksources.

I understand people may 'distrust' tsc after seeing those 'tsc unstable'
cases.  But for newer platforms, if the instability was judged by hpet,
acpi_pm_timer or the software 'refined-jiffies', then it could well be
just a false alarm, and that's not too difficult to root-cause.  And if
there is real evidence of a broken tsc case, then the distrust is not
just an impression left over from the old days :)

Thanks,
Feng
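P.S. For readers who haven't looked at kernel/time/clocksource.c: the
watchdog logic boils down to "read the clock under test and the watchdog
clock, compare how far each advanced, and mark the tested clock unstable
if the two disagree by more than a margin".  The little program below is
only a simplified model of that idea (made-up numbers, arbitrary margin,
none of the kernel's wrap/mask/delay handling), just to show how a slow
or delayed watchdog read ends up being blamed on tsc:

/*
 * Simplified model of the clocksource watchdog idea only -- not the
 * kernel's code.  Two simulated clocks advance by ~500 ms per interval;
 * one interval injects extra delay into the watchdog read, and the skew
 * check then (wrongly) blames the clock under test.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct clk {
	const char *name;
	uint64_t now_ns;	/* simulated reading, already in ns */
	uint64_t last_ns;	/* reading at the previous watchdog pass */
	bool unstable;
};

/* Arbitrary margin for this sketch; the kernel uses its own threshold. */
#define WD_MARGIN_NS 1000000LL		/* 1 ms */

/* One watchdog pass: compare how far both clocks advanced. */
static void watchdog_check(struct clk *cs, struct clk *wd)
{
	int64_t cs_delta = (int64_t)(cs->now_ns - cs->last_ns);
	int64_t wd_delta = (int64_t)(wd->now_ns - wd->last_ns);
	int64_t skew = cs_delta - wd_delta;

	cs->last_ns = cs->now_ns;
	wd->last_ns = wd->now_ns;

	if (llabs(skew) > WD_MARGIN_NS) {
		cs->unstable = true;
		printf("%s marked unstable: %lld ns skew vs %s\n",
		       cs->name, (long long)skew, wd->name);
	}
}

int main(void)
{
	struct clk tsc  = { .name = "tsc" };
	struct clk hpet = { .name = "hpet" };

	for (int i = 1; i <= 5; i++) {
		tsc.now_ns  += 500000000ULL;	/* tsc ticks perfectly */
		/* interval 3: watchdog read delayed by 3 ms (SMI, contention, ...) */
		hpet.now_ns += 500000000ULL + (i == 3 ? 3000000ULL : 0);
		watchdog_check(&tsc, &hpet);
	}
	return 0;
}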
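P.P.S. And for the "HPET is much slower" point above: a crude userspace
loop timing clock_gettime(CLOCK_MONOTONIC) calls is enough to see it.
Run it once with tsc as the current clocksource, then again after
"echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource"
(as root); the per-call cost difference is where regressions like the
stress-ng one come from.  This is only a sanity check, not the benchmark
from the report:

/*
 * Crude check of clocksource read cost: time N back-to-back
 * clock_gettime(CLOCK_MONOTONIC) calls and report the average cost per
 * call.  The result depends on which clocksource currently backs the
 * monotonic clock (tsc vs hpet vs acpi_pm).
 */
#include <stdio.h>
#include <time.h>

int main(void)
{
	enum { N = 1000000 };
	struct timespec start, end, tmp;
	long long ns;
	int i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < N; i++)
		clock_gettime(CLOCK_MONOTONIC, &tmp);
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
	     (end.tv_nsec - start.tv_nsec);
	printf("%d calls, %lld ns total, ~%lld ns/call\n", N, ns, ns / N);
	return 0;
}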