From: Jan Kara
To: Andrew Morton
Cc: Michael Stapelberg, Wu Fengguang, Jan Kara
Subject: [PATCH 3/5] writeback: Fix bandwidth estimate for spiky workload
Date: Tue, 13 Jul 2021 12:47:09 +0200
Message-Id: <20210713104716.22868-3-jack@suse.cz>
In-Reply-To: <20210713104519.16394-1-jack@suse.cz>
References: <20210713104519.16394-1-jack@suse.cz>
Michael Stapelberg has reported that for workloads with short big spikes
of writes (the GCC linker seems to trigger this frequently), the write
throughput is heavily underestimated and tends to steadily sink until it
reaches zero. This has a rather bad impact on writeback throttling
(causing stalls).

The problem is that the writeback throughput estimate gets updated at
most once per 200 ms. One update happens early after we submit pages for
writeback (at that point only a small fraction of the pages has been
written out, so the observed throughput is tiny). The next update happens
only during the next write spike (updates happen only from the inode
writeback and dirty throttling code), and if that is more than 1 s after
the previous spike, we decide the system was idle and just ignore
whatever was written until this moment.
Fix the problem by making sure the writeback throughput estimate is also
updated shortly after writeback completes, to get a reasonable estimate
of throughput for spiky workloads.

Link: https://lore.kernel.org/lkml/20210617095309.3542373-1-stapelberg+linux@google.com
Reported-and-tested-by: Michael Stapelberg
Signed-off-by: Jan Kara
---
 include/linux/backing-dev-defs.h |  1 +
 include/linux/writeback.h        |  1 +
 mm/backing-dev.c                 | 10 +++++++++
 mm/page-writeback.c              | 35 +++++++++++++++++---------------
 4 files changed, 31 insertions(+), 16 deletions(-)

diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index 06fb8e13f6bc..33207004cfde 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -143,6 +143,7 @@ struct bdi_writeback {
 	spinlock_t work_lock;		/* protects work_list & dwork scheduling */
 	struct list_head work_list;
 	struct delayed_work dwork;	/* work item used for writeback */
+	struct delayed_work bw_dwork;	/* work item used for bandwidth estimate */
 
 	unsigned long dirty_sleep;	/* last wait */
 
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 2480322c06a7..cbaef099645e 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -379,6 +379,7 @@ int dirty_writeback_centisecs_handler(struct ctl_table *table, int write,
 void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty);
 unsigned long wb_calc_thresh(struct bdi_writeback *wb, unsigned long thresh);
 
+void wb_update_bandwidth(struct bdi_writeback *wb);
 void balance_dirty_pages_ratelimited(struct address_space *mapping);
 bool wb_over_bg_thresh(struct bdi_writeback *wb);
 
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 1bf16b1eed47..be97bb29a10b 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -271,6 +271,14 @@ void wb_wakeup_delayed(struct bdi_writeback *wb)
 	spin_unlock_bh(&wb->work_lock);
 }
 
+static void wb_update_bandwidth_workfn(struct work_struct *work)
+{
+	struct bdi_writeback *wb = container_of(to_delayed_work(work),
+						struct bdi_writeback, bw_dwork);
+
+	wb_update_bandwidth(wb);
+}
+
 /*
  * Initial write bandwidth: 100 MB/s
  */
@@ -303,6 +311,7 @@ static int wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi,
 	spin_lock_init(&wb->work_lock);
 	INIT_LIST_HEAD(&wb->work_list);
 	INIT_DELAYED_WORK(&wb->dwork, wb_workfn);
+	INIT_DELAYED_WORK(&wb->bw_dwork, wb_update_bandwidth_workfn);
 	wb->dirty_sleep = jiffies;
 
 	err = fprop_local_init_percpu(&wb->completions, gfp);
@@ -351,6 +360,7 @@ static void wb_shutdown(struct bdi_writeback *wb)
 	mod_delayed_work(bdi_wq, &wb->dwork, 0);
 	flush_delayed_work(&wb->dwork);
 	WARN_ON(!list_empty(&wb->work_list));
+	flush_delayed_work(&wb->bw_dwork);
 }
 
 static void wb_exit(struct bdi_writeback *wb)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index e4a381b8944b..eb55c8882db0 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -1340,14 +1340,7 @@ static void __wb_update_bandwidth(struct dirty_throttle_control *gdtc,
 	unsigned long dirtied;
 	unsigned long written;
 
-	lockdep_assert_held(&wb->list_lock);
-
-	/*
-	 * rate-limit, only update once every 200ms.
-	 */
-	if (elapsed < BANDWIDTH_INTERVAL)
-		return;
-
+	spin_lock(&wb->list_lock);
 	dirtied = percpu_counter_read(&wb->stat[WB_DIRTIED]);
 	written = percpu_counter_read(&wb->stat[WB_WRITTEN]);
 
@@ -1369,15 +1362,14 @@ static void __wb_update_bandwidth(struct dirty_throttle_control *gdtc,
 	wb->dirtied_stamp = dirtied;
 	wb->written_stamp = written;
 	wb->bw_time_stamp = now;
+	spin_unlock(&wb->list_lock);
 }
 
-static void wb_update_bandwidth(struct bdi_writeback *wb)
+void wb_update_bandwidth(struct bdi_writeback *wb)
 {
 	struct dirty_throttle_control gdtc = { GDTC_INIT(wb) };
 
-	spin_lock(&wb->list_lock);
 	__wb_update_bandwidth(&gdtc, NULL, false);
-	spin_unlock(&wb->list_lock);
 }
 
 /* Interval after which we consider wb idle and don't estimate bandwidth */
@@ -1722,11 +1714,8 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
 		wb->dirty_exceeded = 1;
 
 	if (time_is_before_jiffies(wb->bw_time_stamp +
-				   BANDWIDTH_INTERVAL)) {
-		spin_lock(&wb->list_lock);
+				   BANDWIDTH_INTERVAL))
 		__wb_update_bandwidth(gdtc, mdtc, true);
-		spin_unlock(&wb->list_lock);
-	}
 
 	/* throttle according to the chosen dtc */
 	dirty_ratelimit = wb->dirty_ratelimit;
@@ -2374,7 +2363,13 @@ int do_writepages(struct address_space *mapping, struct writeback_control *wbc)
 		cond_resched();
 		congestion_wait(BLK_RW_ASYNC, HZ/50);
 	}
-	wb_update_bandwidth(wb);
+	/*
+	 * Usually few pages are written by now from those we've just submitted
+	 * but if there's constant writeback being submitted, this makes sure
+	 * writeback bandwidth is updated once in a while.
+	 */
+	if (time_is_before_jiffies(wb->bw_time_stamp + BANDWIDTH_INTERVAL))
+		wb_update_bandwidth(wb);
 	return ret;
 }
 
@@ -2754,6 +2749,14 @@ static void wb_inode_writeback_start(struct bdi_writeback *wb)
 static void wb_inode_writeback_end(struct bdi_writeback *wb)
 {
 	atomic_dec(&wb->writeback_inodes);
+	/*
+	 * Make sure estimate of writeback throughput gets updated after
+	 * writeback completed. We delay the update by BANDWIDTH_INTERVAL
+	 * (which is the interval other bandwidth updates use for batching) so
+	 * that if multiple inodes end writeback at a similar time, they get
+	 * batched into one bandwidth update.
+	 */
+	queue_delayed_work(bdi_wq, &wb->bw_dwork, BANDWIDTH_INTERVAL);
 }
 
 int test_clear_page_writeback(struct page *page)
-- 
2.26.2