From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v13 064/137] flex_proportions: Allow N events instead of 1
Date: Mon, 12 Jul 2021 04:05:48 +0100
Message-Id: <20210712030701.4000097-65-willy@infradead.org>
In-Reply-To: <20210712030701.4000097-1-willy@infradead.org>
References: <20210712030701.4000097-1-willy@infradead.org>

When batching events (such as writing back N pages in a single I/O), it
is better to do one flex_proportion operation instead of N.  There is
only one caller of __fprop_inc_percpu_max(), and it's the one we're
going to change in the next patch, so rename it instead of adding a
compatibility wrapper.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig
---
 include/linux/flex_proportions.h |  9 +++++----
 lib/flex_proportions.c           | 28 +++++++++++++++++++---------
 mm/page-writeback.c              |  4 ++--
 3 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/include/linux/flex_proportions.h b/include/linux/flex_proportions.h
index c12df59d3f5f..3e378b1fb0bc 100644
--- a/include/linux/flex_proportions.h
+++ b/include/linux/flex_proportions.h
@@ -83,9 +83,10 @@ struct fprop_local_percpu {
 
 int fprop_local_init_percpu(struct fprop_local_percpu *pl, gfp_t gfp);
 void fprop_local_destroy_percpu(struct fprop_local_percpu *pl);
-void __fprop_inc_percpu(struct fprop_global *p, struct fprop_local_percpu *pl);
-void __fprop_inc_percpu_max(struct fprop_global *p, struct fprop_local_percpu *pl,
-		    int max_frac);
+void __fprop_add_percpu(struct fprop_global *p, struct fprop_local_percpu *pl,
+		long nr);
+void __fprop_add_percpu_max(struct fprop_global *p,
+		struct fprop_local_percpu *pl, int max_frac, long nr);
 void fprop_fraction_percpu(struct fprop_global *p,
 	struct fprop_local_percpu *pl,
 	unsigned long *numerator, unsigned long *denominator);
@@ -96,7 +97,7 @@ void fprop_inc_percpu(struct fprop_global *p, struct fprop_local_percpu *pl)
 	unsigned long flags;
 
 	local_irq_save(flags);
-	__fprop_inc_percpu(p, pl);
+	__fprop_add_percpu(p, pl, 1);
 	local_irq_restore(flags);
 }
 
diff --git a/lib/flex_proportions.c b/lib/flex_proportions.c
index 451543937524..53e7eb1dd76c 100644
--- a/lib/flex_proportions.c
+++ b/lib/flex_proportions.c
@@ -217,11 +217,12 @@ static void fprop_reflect_period_percpu(struct fprop_global *p,
 }
 
 /* Event of type pl happened */
-void __fprop_inc_percpu(struct fprop_global *p, struct fprop_local_percpu *pl)
+void __fprop_add_percpu(struct fprop_global *p, struct fprop_local_percpu *pl,
+		long nr)
 {
 	fprop_reflect_period_percpu(p, pl);
-	percpu_counter_add_batch(&pl->events, 1, PROP_BATCH);
-	percpu_counter_add(&p->events, 1);
+	percpu_counter_add_batch(&pl->events, nr, PROP_BATCH);
+	percpu_counter_add(&p->events, nr);
 }
 
 void fprop_fraction_percpu(struct fprop_global *p,
@@ -253,20 +254,29 @@ void fprop_fraction_percpu(struct fprop_global *p,
 }
 
 /*
- * Like __fprop_inc_percpu() except that event is counted only if the given
+ * Like __fprop_add_percpu() except that event is counted only if the given
  * type has fraction smaller than @max_frac/FPROP_FRAC_BASE
  */
-void __fprop_inc_percpu_max(struct fprop_global *p,
-			    struct fprop_local_percpu *pl, int max_frac)
+void __fprop_add_percpu_max(struct fprop_global *p,
+		struct fprop_local_percpu *pl, int max_frac, long nr)
 {
 	if (unlikely(max_frac < FPROP_FRAC_BASE)) {
 		unsigned long numerator, denominator;
+		s64 tmp;
 
 		fprop_fraction_percpu(p, pl, &numerator, &denominator);
-		if (numerator >
-		    (((u64)denominator) * max_frac) >> FPROP_FRAC_SHIFT)
+		/* Adding 'nr' to fraction exceeds max_frac/FPROP_FRAC_BASE? */
+		tmp = (u64)denominator * max_frac -
+			((u64)numerator << FPROP_FRAC_SHIFT);
+		if (tmp < 0) {
+			/* Maximum fraction already exceeded? */
 			return;
+		} else if (tmp < nr * (FPROP_FRAC_BASE - max_frac)) {
+			/* Add just enough for the fraction to saturate */
+			nr = div_u64(tmp + FPROP_FRAC_BASE - max_frac - 1,
+					FPROP_FRAC_BASE - max_frac);
+		}
 	}
 
-	__fprop_inc_percpu(p, pl);
+	__fprop_add_percpu(p, pl, nr);
 }
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index e677e79c7b9b..63c0dd9f8bf7 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -566,8 +566,8 @@ static void wb_domain_writeout_inc(struct wb_domain *dom,
				   struct fprop_local_percpu *completions,
				   unsigned int max_prop_frac)
 {
-	__fprop_inc_percpu_max(&dom->completions, completions,
-			       max_prop_frac);
+	__fprop_add_percpu_max(&dom->completions, completions,
+			       max_prop_frac, 1);
 	/* First event after period switching was turned off? */
 	if (unlikely(!dom->period_time)) {
 		/*
-- 
2.30.2
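
A small aside on the arithmetic that __fprop_add_percpu_max() gains above: the
following is a minimal userspace sketch of the same clamping calculation, for
sanity-checking how 'nr' is reduced so the type's fraction saturates at
max_frac/FPROP_FRAC_BASE.  It is not part of the patch: the clamp_events()
helper name and the FPROP_FRAC_SHIFT value of 10 are assumptions made purely
for the example, and plain 64-bit division stands in for the kernel's
div_u64().

/*
 * Standalone illustration of the saturation logic in __fprop_add_percpu_max().
 * FPROP_FRAC_SHIFT and clamp_events() are illustrative assumptions, not taken
 * from the kernel sources.
 */
#include <stdint.h>
#include <stdio.h>

#define FPROP_FRAC_SHIFT	10
#define FPROP_FRAC_BASE		(1UL << FPROP_FRAC_SHIFT)

/*
 * Return the (possibly reduced) number of events to add so that the type's
 * fraction numerator/denominator stays at or below max_frac/FPROP_FRAC_BASE,
 * mirroring the clamping added to __fprop_add_percpu_max().
 */
static long clamp_events(unsigned long numerator, unsigned long denominator,
			 int max_frac, long nr)
{
	/* tmp < 0 means the fraction already exceeds max_frac: add nothing */
	int64_t tmp = (uint64_t)denominator * max_frac -
		      ((uint64_t)numerator << FPROP_FRAC_SHIFT);

	if (tmp < 0)
		return 0;
	if (tmp < nr * (int64_t)(FPROP_FRAC_BASE - max_frac))
		/* Round up, as div_u64(tmp + base - frac - 1, ...) does above */
		return (tmp + FPROP_FRAC_BASE - max_frac - 1) /
		       (int64_t)(FPROP_FRAC_BASE - max_frac);
	return nr;
}

int main(void)
{
	/* 100 of 1000 events so far, cap at 50%, try to add 900: prints 800 */
	printf("%ld\n", clamp_events(100, 1000, FPROP_FRAC_BASE / 2, 900));
	return 0;
}

With those inputs the clamp yields 800, after which the fraction is
900/1800, i.e. exactly the 50% cap, matching the "add just enough for the
fraction to saturate" comment in the patch.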