Date: Tue, 14 Apr 2020 08:44:47 -0700
From: Christoph Hellwig
To: Luis Chamberlain
Cc: axboe@kernel.dk, viro@zeniv.linux.org.uk, bvanassche@acm.org,
	gregkh@linuxfoundation.org, rostedt@goodmis.org, mingo@redhat.com,
	jack@suse.cz, ming.lei@redhat.com, nstange@suse.de,
	akpm@linux-foundation.org, mhocko@suse.com, yukuai3@huawei.com,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Omar Sandoval, Hannes Reinecke, Michal Hocko
Subject: Re: [PATCH 4/5] mm/swapfile: refcount block and queue
	before using blkcg_schedule_throttle()
Message-ID: <20200414154447.GC25765@infradead.org>
References: <20200414041902.16769-1-mcgrof@kernel.org>
	<20200414041902.16769-5-mcgrof@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200414041902.16769-5-mcgrof@kernel.org>

On Tue, Apr 14, 2020 at 04:19:01AM +0000, Luis Chamberlain wrote:
> Block devices are refcounted to ensure that, once their final user goes
> away, they can be cleaned up properly by the lower layers. The block
> device's request_queue structure is also refcounted; however, if the
> last blk_put_queue() is called in atomic context, the block layer has
> to defer removal.
> 
> By refcounting the block device for the duration of the
> blkcg_schedule_throttle() call, we ensure two things:
> 
> 1) the block device remains available during the call
> 2) we avoid having to deal with the fact that we're using the
>    request_queue structure in atomic context, since the last
>    blk_put_queue() will be called upon disk_release(), *after*
>    our own bdput().
> 
> This means this code path is *not* going to remove the request_queue
> structure, as we are ensuring that some later upper-layer
> disk_release() will be the one to release the request_queue structure
> for us.
> 
> Cc: Bart Van Assche
> Cc: Omar Sandoval
> Cc: Hannes Reinecke
> Cc: Nicolai Stange
> Cc: Greg Kroah-Hartman
> Cc: Michal Hocko
> Cc: yu kuai
> Signed-off-by: Luis Chamberlain
> ---
>  mm/swapfile.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 6659ab563448..9285ff6030ca 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -3753,6 +3753,7 @@ static void free_swap_count_continuations(struct swap_info_struct *si)
>  void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, int node,
>  				  gfp_t gfp_mask)
>  {
> +	struct block_device *bdev;
>  	struct swap_info_struct *si, *next;
>  	if (!(gfp_mask & __GFP_IO) || !memcg)
>  		return;
> @@ -3771,8 +3772,17 @@ void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, int node,
>  	plist_for_each_entry_safe(si, next, &swap_avail_heads[node],
>  				  avail_lists[node]) {
>  		if (si->bdev) {
> -			blkcg_schedule_throttle(bdev_get_queue(si->bdev),
> -						true);
> +			bdev = bdgrab(si->bdev);
> +			if (!bdev)
> +				continue;
> +			/*
> +			 * By adding our own bdgrab() we ensure the queue
> +			 * sticks around until disk_release(), and so we ensure
> +			 * our release of the request_queue does not happen in
> +			 * atomic context.
> +			 */
> +			blkcg_schedule_throttle(bdev_get_queue(bdev), true);
> +			bdput(bdev);

I don't understand the atomic part of the comment.  How does
bdgrab/bdput help us there?
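
For reference, a minimal sketch of the pin-then-throttle pattern the
commit message describes, using only the kernel calls already shown in
the quoted hunk (bdgrab, bdev_get_queue, blkcg_schedule_throttle,
bdput); the wrapper function name is made up for illustration and this
is not code from the patch series:

#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/blk-cgroup.h>

/* Illustrative only: mirrors the pattern in the quoted hunk above. */
static void throttle_via_pinned_bdev(struct block_device *target)
{
	struct block_device *bdev;

	/* Take our own reference on the block device. */
	bdev = bdgrab(target);
	if (!bdev)
		return;

	/*
	 * Use the request_queue while the bdev reference is held; the
	 * commit message relies on this pin to push the final queue
	 * release out to disk_release() rather than this call site.
	 */
	blkcg_schedule_throttle(bdev_get_queue(bdev), true);

	/* Drop our reference again. */
	bdput(bdev);
}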