Subject: Re: [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size
To: Michal Hocko
Cc: linux-mm@kvack.org, vdavydov.dev@gmail.com, hannes@cmpxchg.org
References: <20190905071034.16822-1-honglei.wang@oracle.com> <20191007142805.GM2381@dhcp22.suse.cz>
From: Honglei Wang
Message-ID: <991b4719-a2a0-9efe-de02-56a928752fe3@oracle.com>
Date: Tue, 8 Oct 2019 17:34:03 +0800
In-Reply-To: <20191007142805.GM2381@dhcp22.suse.cz>

On 10/7/19 10:28 PM, Michal Hocko wrote:
> On Thu 05-09-19 15:10:34, Honglei Wang wrote:
>> lruvec_lru_size() currently calls lruvec_page_state_local() to get the
>> lru_size. That value is based on lruvec_stat_local.count[] of
>> mem_cgroup_per_node, and this counter is updated in batches: pages are
>> not accounted until the pending update reaches MEMCG_CHARGE_BATCH,
>> which is currently defined as 32.
>>
>> The testcase in LTP madvise09[1] fails because a small block of memory
>> is not accounted. It creates a new memcgroup and sets up 32 MADV_FREE
>> pages, then forks a child which introduces memory pressure in the
>> memcgroup. The MADV_FREE pages are expected to be released under that
>> pressure, but 32 is not more than MEMCG_CHARGE_BATCH, so these pages
>> are not reflected in lruvec_stat_local.count[] until more pages arrive
>> to satisfy the batching threshold. As a result, the MADV_FREE pages
>> cannot be freed under memory pressure, which conflicts somewhat with
>> the definition of MADV_FREE.
>
> The test case is simply wrong. The caching and the batch size are an
> internal implementation detail. Moreover, MADV_FREE is a _hint_, so all
> you can say is that those pages will get freed at some point in time;
> you cannot make any assumptions about when that moment happens.
>

This is a corner case: the test applies extreme memory pressure, which
gives the group no chance to satisfy the batch operation. There is only
a small chance of hitting such a problem in a real workload -- 128K is a
tiny amount of memory by today's standards. I know exactly what you
mean: the batch size is an internal implementation detail, and this
*test case* just happens to hit it as a black box.
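To make the batching effect concrete, here is a tiny standalone sketch
of the pattern (just an illustration of the idea, not the kernel
implementation; the names, the single "per-cpu" delta and the exact
flush condition are simplified):

/*
 * Illustration of a batched counter (NOT kernel code).  A pending delta
 * is only folded into the shared counter once it exceeds the batch size,
 * so a reader of the shared counter never sees a pending block of 32
 * pages or fewer -- which is why reclaim thinks this LRU is empty.
 */
#include <stdio.h>
#include <stdlib.h>

#define BATCH 32			/* stands in for MEMCG_CHARGE_BATCH */

struct batched_counter {
	long shared;			/* what batched-stat readers see */
	long pending;			/* delta not yet flushed */
};

static void mod_counter(struct batched_counter *c, long nr_pages)
{
	c->pending += nr_pages;
	if (labs(c->pending) > BATCH) {	/* flush only past the batch size */
		c->shared += c->pending;
		c->pending = 0;
	}
}

int main(void)
{
	struct batched_counter lru = { 0, 0 };

	mod_counter(&lru, 32);		/* the 32 MADV_FREE pages */
	printf("reader sees %ld pages, %ld pending\n",
	       lru.shared, lru.pending);	/* sees 0, 32 pending */

	mod_counter(&lru, 1);		/* one more page crosses the batch */
	printf("reader sees %ld pages, %ld pending\n",
	       lru.shared, lru.pending);	/* sees 33, 0 pending */
	return 0;
}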
>> Getting lru_size based on lru_zone_size of mem_cgroup_per_node, which
>> is not updated in batches, can make it a bit more accurate in similar
>> scenarios.
>
> What does that mean? It would be more helpful to describe the code path
> which will use this more precise value and what is the effect of that.
>

How about we describe it like this:

Getting the lru_size from lru_zone_size of mem_cgroup_per_node, which is
not updated via batching, lets the related code paths see a more precise
LRU size in the mem_cgroup case. With this, the memory reclaim code no
longer ignores small blocks of memory (say, fewer than MEMCG_CHARGE_BATCH
pages) on the LRU list. For this specific MADV_FREE case, the more
precise LRU size lets the fewer-than-32 pages be released as expected.

Thanks,
Honglei

> As I've said in the previous version, I do not object to the patch
> because a more precise lruvec_lru_size sounds like a nice thing as long
> as we are not paying a high price for that. Just look at the global case
> for mem_cgroup_disabled(). It uses node_page_state and that one is using
> per-cpu accounting with regular global value refreshing IIRC.
>
>> [1] https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise09.c
>>
>> Signed-off-by: Honglei Wang
>> ---
>>  mm/vmscan.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index c77d1e3761a7..c28672460868 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -354,12 +354,13 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
>>   */
>>  unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
>>  {
>> -	unsigned long lru_size;
>> +	unsigned long lru_size = 0;
>>  	int zid;
>>
>> -	if (!mem_cgroup_disabled())
>> -		lru_size = lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
>> -	else
>> +	if (!mem_cgroup_disabled()) {
>> +		for (zid = 0; zid < MAX_NR_ZONES; zid++)
>> +			lru_size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
>> +	} else
>>  		lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
>>
>>  	for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
>> --
>> 2.17.0
>
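P.S. For anyone not familiar with madvise09, the scenario it exercises
looks roughly like the sketch below. This is my own simplified
reconstruction from the description above, not the LTP source; the
cgroup-v1 paths, the 128K limit and the missing error handling are
assumptions.

/* Simplified madvise09-style scenario: 32 MADV_FREE pages in a small
 * memcg, then memory pressure from a child in the same group. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

#define NR_PAGES 32

static void write_file(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd >= 0) {
		write(fd, val, strlen(val));
		close(fd);
	}
}

int main(void)
{
	long pgsz = sysconf(_SC_PAGESIZE);
	char pid[32];
	char *buf;

	/* Fresh memcg with a small (128K) limit; move ourselves into it. */
	mkdir("/sys/fs/cgroup/memory/madv_free_test", 0755);
	write_file("/sys/fs/cgroup/memory/madv_free_test/memory.limit_in_bytes",
		   "131072");
	snprintf(pid, sizeof(pid), "%d", getpid());
	write_file("/sys/fs/cgroup/memory/madv_free_test/tasks", pid);

	/* Set up 32 dirty anonymous pages and mark them lazily freeable. */
	buf = mmap(NULL, NR_PAGES * pgsz, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(buf, 0xaa, NR_PAGES * pgsz);
	madvise(buf, NR_PAGES * pgsz, MADV_FREE);

	/* Child allocates in the same memcg, pushing it past its limit. */
	if (fork() == 0) {
		char *p = mmap(NULL, NR_PAGES * pgsz, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		memset(p, 0x55, NR_PAGES * pgsz);
		_exit(0);
	}
	wait(NULL);

	/*
	 * A reclaimed MADV_FREE page reads back as zeroes.  Without the fix
	 * the 32 pages never show up in the batched LRU counter, reclaim
	 * skips them, and the 0xaa data survives the pressure.
	 */
	printf("first byte after pressure: 0x%02x\n", (unsigned char)buf[0]);
	return 0;
}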