Date: Mon, 13 Jul 2020 12:42:35 +0100
From: Chris Down
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@fb.com
Subject: [PATCH v2 1/2] mm, memcg: reclaim more aggressively before high allocator throttling

In Facebook production, we've seen cases where cgroups have been put
into allocator throttling even when they appear to have a lot of slack
file caches which should be trivially reclaimable.

Looking more closely, the problem is that we only try a single cgroup
reclaim walk for each return to usermode before calculating whether or
not we should throttle. This single attempt doesn't produce enough
reclaim pressure for cgroups whose file caches are growing rapidly, so
they enter allocator throttling even though plenty of pages could still
be reclaimed.

As an example, we see that threads in an affected cgroup are stuck in
allocator throttling:

# for i in $(cat cgroup.threads); do
>     grep over_high "/proc/$i/stack"
> done
[<0>] mem_cgroup_handle_over_high+0x10b/0x150
[<0>] mem_cgroup_handle_over_high+0x10b/0x150
[<0>] mem_cgroup_handle_over_high+0x10b/0x150

...however, there is no I/O pressure reported by PSI, despite a lot of
slack file pages:

# cat memory.pressure
some avg10=78.50 avg60=84.99 avg300=84.53 total=5702440903
full avg10=78.50 avg60=84.99 avg300=84.53 total=5702116959
# cat io.pressure
some avg10=0.00 avg60=0.00 avg300=0.00 total=78051391
full avg10=0.00 avg60=0.00 avg300=0.00 total=78049640
# grep _file memory.stat
inactive_file 1370939392
active_file 661635072

This patch changes the behaviour to retry reclaim either until the
current task's throttling penalty falls below the 10ms grace period, or
until we are making no reclaim progress at all. In the latter case, we
enter allocator throttling as before.

To a user, there's no intuitive reason for the reclaim behaviour to
differ between hitting memory.high as part of a new allocation and
hitting memory.high because someone lowered its value. As such, this
change also brings a secondary benefit: it unifies the reclaim
behaviour between the two cases.

There's precedent for this behaviour: we already do reclaim retries
when writing to memory.{high,max}, in max reclaim, and in the page
allocator itself (see the example below).
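For instance, lowering memory.high on a populated cgroup already
performs this kind of retried reclaim synchronously from the write
path; the cgroup name below is a hypothetical example:

# echo 512M > /sys/fs/cgroup/workload/memory.high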
Signed-off-by: Chris Down
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Tejun Heo
Cc: Michal Hocko
---
 mm/memcontrol.c | 42 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 37 insertions(+), 5 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0145a77aa074..d4b0d8af3747 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -73,6 +73,7 @@ EXPORT_SYMBOL(memory_cgrp_subsys);
 
 struct mem_cgroup *root_mem_cgroup __read_mostly;
 
+/* The number of times we should retry reclaim failures before giving up. */
 #define MEM_CGROUP_RECLAIM_RETRIES 5
 
 /* Socket memory accounting disabled? */
@@ -2365,18 +2366,23 @@ static int memcg_hotplug_cpu_dead(unsigned int cpu)
 	return 0;
 }
 
-static void reclaim_high(struct mem_cgroup *memcg,
-			 unsigned int nr_pages,
-			 gfp_t gfp_mask)
+static unsigned long reclaim_high(struct mem_cgroup *memcg,
+				  unsigned int nr_pages,
+				  gfp_t gfp_mask)
 {
+	unsigned long nr_reclaimed = 0;
+
 	do {
 		if (page_counter_read(&memcg->memory) <=
 		    READ_ONCE(memcg->memory.high))
 			continue;
 		memcg_memory_event(memcg, MEMCG_HIGH);
-		try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, true);
+		nr_reclaimed += try_to_free_mem_cgroup_pages(memcg, nr_pages,
+							     gfp_mask, true);
 	} while ((memcg = parent_mem_cgroup(memcg)) &&
 		 !mem_cgroup_is_root(memcg));
+
+	return nr_reclaimed;
 }
 
 static void high_work_func(struct work_struct *work)
@@ -2532,16 +2538,32 @@ void mem_cgroup_handle_over_high(void)
 {
 	unsigned long penalty_jiffies;
 	unsigned long pflags;
+	unsigned long nr_reclaimed;
 	unsigned int nr_pages = current->memcg_nr_pages_over_high;
+	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
 	struct mem_cgroup *memcg;
+	bool in_retry = false;
 
 	if (likely(!nr_pages))
 		return;
 
 	memcg = get_mem_cgroup_from_mm(current->mm);
-	reclaim_high(memcg, nr_pages, GFP_KERNEL);
 	current->memcg_nr_pages_over_high = 0;
 
+retry_reclaim:
+	/*
+	 * The allocating task should reclaim at least the batch size, but for
+	 * subsequent retries we only want to do what's necessary to prevent oom
+	 * or breaching resource isolation.
+	 *
+	 * This is distinct from memory.max or page allocator behaviour because
+	 * memory.high is currently batched, whereas memory.max and the page
+	 * allocator run every time an allocation is made.
+	 */
+	nr_reclaimed = reclaim_high(memcg,
+				    in_retry ? SWAP_CLUSTER_MAX : nr_pages,
+				    GFP_KERNEL);
+
 	/*
 	 * memory.high is breached and reclaim is unable to keep up. Throttle
 	 * allocators proactively to slow down excessive growth.
 	 */
@@ -2568,6 +2590,16 @@ void mem_cgroup_handle_over_high(void)
 	if (penalty_jiffies <= HZ / 100)
 		goto out;
 
+	/*
+	 * If reclaim is making forward progress but we're still over
+	 * memory.high, we want to encourage that rather than doing allocator
+	 * throttling.
+	 */
+	if (nr_reclaimed || nr_retries--) {
+		in_retry = true;
+		goto retry_reclaim;
+	}
+
 	/*
 	 * If we exit early, we're guaranteed to die (since
 	 * schedule_timeout_killable sets TASK_KILLABLE). This means we don't
-- 
2.27.0
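For readers who want to experiment with the control flow outside the
kernel, below is a minimal, self-contained userspace sketch of the
retry loop this patch adds to mem_cgroup_handle_over_high(). Every
name prefixed fake_, the globals, and the constants are hypothetical
stand-ins rather than kernel APIs; the diff above is the authoritative
implementation.

/*
 * Hypothetical userspace simulation of the memory.high retry loop.
 * fake_reclaim() and fake_penalty_jiffies() stand in for reclaim_high()
 * and the real penalty calculation.
 */
#include <stdbool.h>
#include <stdio.h>

#define MEM_CGROUP_RECLAIM_RETRIES 5
#define SWAP_CLUSTER_MAX 32
#define GRACE_JIFFIES 10	/* stand-in for the HZ/100 (~10ms) grace period */

/* Hypothetical world state: how many pages the cgroup sits over high. */
static unsigned long pages_over_high = 512;

/* Stand-in for reclaim_high(): frees at most nr_pages of the overage. */
static unsigned long fake_reclaim(unsigned int nr_pages)
{
	unsigned long freed = nr_pages < pages_over_high ?
			      nr_pages : pages_over_high;

	pages_over_high -= freed;
	return freed;
}

/* Stand-in for the penalty calculation: grows with the remaining overage. */
static unsigned long fake_penalty_jiffies(void)
{
	return pages_over_high / 4;
}

int main(void)
{
	unsigned int nr_pages = 256;	/* this task's allocation over high */
	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
	bool in_retry = false;
	unsigned long nr_reclaimed;

	for (;;) {
		/*
		 * The first pass reclaims the full overage; retries only ask
		 * for SWAP_CLUSTER_MAX, enough to keep making progress
		 * without over-reclaiming.
		 */
		nr_reclaimed = fake_reclaim(in_retry ? SWAP_CLUSTER_MAX
						     : nr_pages);

		/* Below the grace period: no throttling needed. */
		if (fake_penalty_jiffies() <= GRACE_JIFFIES) {
			printf("under grace period, returning to usermode\n");
			return 0;
		}

		/* Retry while reclaim progresses or retries remain. */
		if (nr_reclaimed || nr_retries--) {
			in_retry = true;
			continue;
		}

		printf("no reclaim progress left, would throttle\n");
		return 0;
	}
}

Run as a normal C program, the sketch reclaims the full 256-page batch
on the first pass and SWAP_CLUSTER_MAX per retry afterwards, exiting as
soon as the simulated penalty drops under the grace period: the same
shape as the patch, which only falls through to allocator throttling
once reclaim stops making progress and the retries are exhausted.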