From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail-qt0-f195.google.com ([209.85.216.195]:43583 "EHLO
        mail-qt0-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S936275AbeF2T0F (ORCPT );
        Fri, 29 Jun 2018 15:26:05 -0400
Received: by mail-qt0-f195.google.com with SMTP id c8-v6so8820241qtp.10
        for ; Fri, 29 Jun 2018 12:26:05 -0700 (PDT)
From: Josef Bacik
To: axboe@kernel.dk, kernel-team@fb.com, linux-block@vger.kernel.org,
        akpm@linux-foundation.org, hannes@cmpxchg.org, tj@kernel.org,
        linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: Josef Bacik
Subject: [PATCH 14/14] skip readahead if the cgroup is congested
Date: Fri, 29 Jun 2018 15:25:42 -0400
Message-Id: <20180629192542.26649-15-josef@toxicpanda.com>
In-Reply-To: <20180629192542.26649-1-josef@toxicpanda.com>
References: <20180629192542.26649-1-josef@toxicpanda.com>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

From: Josef Bacik

In testing we noticed pretty bad latency stalls under heavy pressure
because readahead would keep doing its thing while the cgroup was
already severely throttled. When we're under that much pressure we want
to issue as little IO as possible so the cgroup can still make progress
on real work, so just skip readahead if our cgroup is congested.

Signed-off-by: Josef Bacik
---
 mm/readahead.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/readahead.c b/mm/readahead.c
index 539bbb6c1fad..cda6b09e0a59 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -19,6 +19,7 @@
 #include <linux/syscalls.h>
 #include <linux/file.h>
 #include <linux/mm_inline.h>
+#include <linux/blk-cgroup.h>
 
 #include "internal.h"
 
@@ -500,6 +501,9 @@ void page_cache_sync_readahead(struct address_space *mapping,
 	if (!ra->ra_pages)
 		return;
 
+	if (blk_cgroup_congested())
+		return;
+
 	/* be dumb */
 	if (filp && (filp->f_mode & FMODE_RANDOM)) {
 		force_page_cache_readahead(mapping, filp, offset, req_size);
@@ -550,6 +554,9 @@ page_cache_async_readahead(struct address_space *mapping,
 	if (inode_read_congested(mapping->host))
 		return;
 
+	if (blk_cgroup_congested())
+		return;
+
 	/* do read-ahead */
 	ondemand_readahead(mapping, ra, filp, true, offset, req_size);
 }
-- 
2.14.3
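
For reference, with this patch applied page_cache_sync_readahead() ends up
looking roughly like the sketch below. Everything outside the hunk context
quoted above (the tail of the signature, the "no read-ahead" comment, and the
final ondemand_readahead() call) is reconstructed from the mm/readahead.c of
that era rather than taken from this mail, so treat it as an approximation:

void page_cache_sync_readahead(struct address_space *mapping,
			       struct file_ra_state *ra, struct file *filp,
			       pgoff_t offset, unsigned long req_size)
{
	/* no read-ahead */
	if (!ra->ra_pages)
		return;

	/* new in this patch: issue no readahead at all if our blkcg is congested */
	if (blk_cgroup_congested())
		return;

	/* be dumb */
	if (filp && (filp->f_mode & FMODE_RANDOM)) {
		force_page_cache_readahead(mapping, filp, offset, req_size);
		return;
	}

	/* do read-ahead */
	ondemand_readahead(mapping, ra, filp, false, offset, req_size);
}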