Date: Thu, 9 Jul 2020 12:55:54 +0800
From: Feng Tang
To: "Huang, Ying", Andi Kleen, Qian Cai, Andrew Morton, Michal Hocko
Cc: Dennis Zhou, Tejun Heo, Christoph Lameter, kernel test robot,
    Johannes Weiner, Matthew Wilcox, Mel Gorman, Kees Cook,
    Luis Chamberlain, Iurii Zaikin, tim.c.chen@intel.com,
    dave.hansen@intel.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, lkp@lists.01.org
Subject: Re: [mm] 4e2c82a409: ltp.overcommit_memory01.fail
Message-ID: <20200709045554.GA56190@shbuild999.sh.intel.com>
References: <20200705125854.GA66252@shbuild999.sh.intel.com>
 <20200705155232.GA608@lca.pw>
 <20200706014313.GB66252@shbuild999.sh.intel.com>
 <20200706023614.GA1231@lca.pw>
 <20200706132443.GA34488@shbuild999.sh.intel.com>
 <20200706133434.GA3483883@tassilo.jf.intel.com>
 <20200707023829.GA85993@shbuild999.sh.intel.com>
 <87zh8c7z5i.fsf@yhuang-dev.intel.com>
 <20200707054120.GC21741@shbuild999.sh.intel.com>
In-Reply-To: <20200707054120.GC21741@shbuild999.sh.intel.com>

On Tue, Jul 07, 2020 at 01:41:20PM +0800, Feng Tang wrote:
> On Tue, Jul 07, 2020 at 12:00:09PM +0800, Huang, Ying wrote:
> > Feng Tang writes:
> > 
> > > On Mon, Jul 06, 2020 at 06:34:34AM -0700, Andi Kleen wrote:
> > >> >  	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
> > >> > -	if (ret == 0 && write)
> > >> > +	if (ret == 0 && write) {
> > >> > +		if (sysctl_overcommit_memory == OVERCOMMIT_NEVER)
> > >> > +			schedule_on_each_cpu(sync_overcommit_as);
> > >> 
> > >> The schedule_on_each_cpu is not atomic, so the problem could still
> > >> happen in that window.
> > >> 
> > >> I think it may be ok if it eventually resolves, but it certainly needs
> > >> a comment explaining it. Can you do some stress testing, toggling the
> > >> policy all the time on different CPUs while running the test on
> > >> other CPUs, and see if the test fails?
> > > 
> > > For the raw test case reported by 0day, this patch passed 200 runs.
> > > And I will read the ltp code and try stress testing it as you
> > > suggested.
> > > 
> > > 
> > >> The other alternative would be to define some intermediate state for
> > >> the sysctl variable and only switch to never once the
> > >> schedule_on_each_cpu has returned. But that's more complexity.
> > > 
> > > One thought I had is to put this schedule_on_each_cpu() before
> > > the proc_dointvec_minmax(), to do the sync before
> > > sysctl_overcommit_memory is really changed. But the window still
> > > exists, as the batch is still the larger one.
> > 
> > Can we change the batch first, then sync the global counter, and
> > finally change the overcommit policy?
> 
> These reorderings are really head scratching :)
> 
> I've thought about this before, when Qian Cai first reported the warning
> message, as the kernel had a check:
> 
> 	VM_WARN_ONCE(percpu_counter_read(&vm_committed_as) <
> 		     -(s64)vm_committed_as_batch * num_online_cpus(),
> 		     "memory commitment underflow");
> 
> If the batch is decreased first, the warning will be easier/earlier to
> trigger, so I didn't bring this up when handling the warning message.
> 
> But it might work now, as the warning has been removed.

I tested the reordered approach, and the test passed 100 runs.

The new order when changing the policy to OVERCOMMIT_NEVER is:
1. re-compute the batch (to the smaller one)
2. do the on_each_cpu sync
3. really change the policy to NEVER

It solves one of the previous concerns: after the sync is done on cpuX,
but before the sync on all cpus has finished, there is a window in which
the percpu counter could be enlarged again.

IIRC, Andi had a concern about the read-side cost of doing the sync. My
understanding is that most readers (malloc/free/map/unmap) use
percpu_counter_read_positive(), which is a fast path that takes no lock.
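Note: sync_overcommit_as() itself is not quoted in this thread. Below is
a minimal sketch of what such a worker could look like, assuming a
percpu_counter_sync() helper that folds the calling CPU's cached delta
into the global count under the counter's lock:

	#include <linux/mman.h>
	#include <linux/percpu_counter.h>
	#include <linux/workqueue.h>

	/*
	 * Sketch only (assumed, not quoted from the patch): scheduled once
	 * on every CPU via schedule_on_each_cpu(), so when that call
	 * returns, each CPU's cached delta has been flushed into
	 * vm_committed_as.
	 */
	static void sync_overcommit_as(struct work_struct *dummy)
	{
		percpu_counter_sync(&vm_committed_as);
	}

Because the sync work runs in process context per CPU, the readers above
never wait on it; only the policy-write path pays the cost.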
As for the problem itself, I agree with Michal's point that there is
usually no normal use case that changes the overcommit policy too
frequently.

The main logic is in overcommit_policy_handler(), built on top of the
previous sync fix. Please help to review, thanks!

int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
		size_t *lenp, loff_t *ppos)
{
	int ret;

	if (write) {
		int new_policy;
		struct ctl_table t;

		/*
		 * Parse the new value into a local variable first, so
		 * sysctl_overcommit_memory itself is not touched yet.
		 */
		t = *table;
		t.data = &new_policy;
		ret = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
		if (ret)
			return ret;

		/*
		 * Ordering matters: shrink the batch, sync the percpu
		 * deltas, and only then publish the new policy.
		 */
		mm_compute_batch(new_policy);
		if (new_policy == OVERCOMMIT_NEVER)
			schedule_on_each_cpu(sync_overcommit_as);
		sysctl_overcommit_memory = new_policy;
	} else {
		ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
	}

	return ret;
}

- Feng
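P.S. mm_compute_batch() with the new policy argument is not quoted above
either. Here is a rough sketch of what step 1 (re-computing the batch)
could look like; the 1/256 vs 1/4 scaling factors are illustrative
assumptions, not values taken from the patch:

	/*
	 * Sketch only (assumed scaling): use a small percpu batch for
	 * OVERCOMMIT_NEVER so the global counter stays close to the true
	 * committed total, and a large one otherwise to reduce contention
	 * on vm_committed_as.
	 */
	void mm_compute_batch(int overcommit_policy)
	{
		s32 nr = num_present_cpus();
		s32 batch = max_t(s32, nr * 2, 32);
		u64 memsized_batch;

		if (overcommit_policy == OVERCOMMIT_NEVER)
			memsized_batch = min_t(u64, totalram_pages() / nr / 256, INT_MAX);
		else
			memsized_batch = min_t(u64, totalram_pages() / nr / 4, INT_MAX);

		vm_committed_as_batch = max_t(s32, memsized_batch, batch);
	}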