From: Stefan Roesch
Subject: [RFC PATCH v4 03/17] mm: Prepare balance_dirty_pages() for async buffered writes
Date: Fri, 20 May 2022 11:36:32 -0700
Message-ID: <20220520183646.2002023-4-shr@fb.com>
In-Reply-To: <20220520183646.2002023-1-shr@fb.com>
References: <20220520183646.2002023-1-shr@fb.com>
X-Mailer: git-send-email 2.30.2
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Jan Kara

If balance_dirty_pages() gets called for async buffered write, we don't
want to wait. Instead we need to indicate to the caller that throttling
is needed so that it can stop writing and offload the rest of the write
to a context that can block.

Signed-off-by: Jan Kara
Signed-off-by: Stefan Roesch
---
 include/linux/writeback.h |  4 ++++
 mm/page-writeback.c       | 12 +++++++++---
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index fec248ab1fec..a9114c5090e9 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -372,6 +372,10 @@ void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty);
 unsigned long wb_calc_thresh(struct bdi_writeback *wb, unsigned long thresh);
 
 void wb_update_bandwidth(struct bdi_writeback *wb);
+
+/* Invoke balance dirty pages in async mode.
+ */
+#define BDP_ASYNC	0x0001
+
 void balance_dirty_pages_ratelimited(struct address_space *mapping);
 bool wb_over_bg_thresh(struct bdi_writeback *wb);
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 89dcc7d8395a..7a320fd2ad33 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -1545,8 +1545,8 @@ static inline void wb_dirty_limits(struct dirty_throttle_control *dtc)
  * If we're over `background_thresh' then the writeback threads are woken to
  * perform some writeout.
  */
-static void balance_dirty_pages(struct bdi_writeback *wb,
-		unsigned long pages_dirtied)
+static int balance_dirty_pages(struct bdi_writeback *wb,
+		unsigned long pages_dirtied, unsigned int flags)
 {
 	struct dirty_throttle_control gdtc_stor = { GDTC_INIT(wb) };
 	struct dirty_throttle_control mdtc_stor = { MDTC_INIT(wb, &gdtc_stor) };
@@ -1566,6 +1566,7 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
 	struct backing_dev_info *bdi = wb->bdi;
 	bool strictlimit = bdi->capabilities & BDI_CAP_STRICTLIMIT;
 	unsigned long start_time = jiffies;
+	int ret = 0;
 
 	for (;;) {
 		unsigned long now = jiffies;
@@ -1794,6 +1795,10 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
 					  period, pause, start_time);
+		if (flags & BDP_ASYNC) {
+			ret = -EAGAIN;
+			break;
+		}
 		__set_current_state(TASK_KILLABLE);
 		wb->dirty_sleep = now;
 		io_schedule_timeout(pause);
@@ -1825,6 +1830,7 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
 		if (fatal_signal_pending(current))
 			break;
 	}
+	return ret;
 }
 
 static DEFINE_PER_CPU(int, bdp_ratelimits);
@@ -1906,7 +1912,7 @@ void balance_dirty_pages_ratelimited(struct address_space *mapping)
 	preempt_enable();
 
 	if (unlikely(current->nr_dirtied >= ratelimit))
-		balance_dirty_pages(wb, current->nr_dirtied);
+		balance_dirty_pages(wb, current->nr_dirtied, 0);
 
 	wb_put(wb);
 }
-- 
2.30.2