From: Qian Cai
Date: Tue, 23 Jun 2020 12:17:54 -0400
To: Daniel Vetter
Cc: Intel Graphics Development, DRI Development, LKML, amd-gfx list,
    Thomas Hellström, Andrew Morton, Jason Gunthorpe, Linux MM,
    linux-rdma, Maarten Lankhorst, Christian König, Daniel Vetter,
    linux-xfs@vger.kernel.org
Subject: Re: [PATCH] mm: Track mmu notifiers in fs_reclaim_acquire/release
Message-ID: <20200623161754.GA1140@lca.pw>
References: <20200604081224.863494-2-daniel.vetter@ffwll.ch>
 <20200610194101.1668038-1-daniel.vetter@ffwll.ch>
 <20200621174205.GB1398@lca.pw>
 <20200621200103.GV20149@phenom.ffwll.local>
In-Reply-To: <20200621200103.GV20149@phenom.ffwll.local>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, Jun 21, 2020 at 10:01:03PM +0200, Daniel Vetter wrote:
> On Sun, Jun 21, 2020 at 08:07:08PM +0200, Daniel Vetter wrote:
> > On Sun, Jun 21, 2020 at 7:42 PM Qian Cai wrote:
> > >
> > > On Wed, Jun 10, 2020 at 09:41:01PM +0200, Daniel Vetter wrote:
> > > > fs_reclaim_acquire/release nicely catch recursion issues when
> > > > allocating GFP_KERNEL memory against shrinkers (which gpu drivers tend
> > > > to use to keep the excessive caches in check). For mmu notifier
> > > > recursions we do have lockdep annotations since 23b68395c7c7
> > > > ("mm/mmu_notifiers: add a lockdep map for invalidate_range_start/end").
> > > >
> > > > But these only fire if a path actually results in some pte
> > > > invalidation - for most small allocations that's very rarely the case.
> > > > The other trouble is that pte invalidation can happen any time when
> > > > __GFP_RECLAIM is set. Which means only really GFP_ATOMIC is a safe
> > > > choice, GFP_NOIO isn't good enough to avoid potential mmu notifier
> > > > recursion.
> > > >
> > > > I was pondering whether we should just do the general annotation, but
> > > > there's always the risk of false positives. Plus I'm assuming that
> > > > the core fs and io code is a lot better reviewed and tested than
> > > > random mmu notifier code in drivers. Hence why I decided to only
> > > > annotate for that specific case.
> > > >
> > > > Furthermore even if we'd create a lockdep map for direct reclaim, we'd
> > > > still need to explicitly pull in the mmu notifier map - there are a lot
> > > > more places that do pte invalidation than just direct reclaim, these
> > > > two contexts aren't the same.
> > > >
> > > > Note that the mmu notifiers needing their own independent lockdep map
> > > > is also the reason we can't hold them from fs_reclaim_acquire to
> > > > fs_reclaim_release - it would nest with the acquisition in the pte
> > > > invalidation code, causing a lockdep splat. And we can't remove the
> > > > annotations from pte invalidation and all the other places since
> > > > they're called from many other places than page reclaim. Hence we can
> > > > only do the equivalent of might_lock, but on the raw lockdep map.
> > > >
> > > > With this we can also remove the lockdep priming added in 66204f1d2d1b
> > > > ("mm/mmu_notifiers: prime lockdep") since the new annotations are
> > > > strictly more powerful.
> > > >
> > > > v2: Review from Thomas Hellstrom:
> > > > - unbotch the fs_reclaim context check, I accidentally inverted it,
> > > >   but it didn't blow up because I inverted it immediately
> > > > - fix compiling for !CONFIG_MMU_NOTIFIER
> > > >
> > > > Cc: Thomas Hellström (Intel)
> > > > Cc: Andrew Morton
> > > > Cc: Jason Gunthorpe
> > > > Cc: linux-mm@kvack.org
> > > > Cc: linux-rdma@vger.kernel.org
> > > > Cc: Maarten Lankhorst
> > > > Cc: Christian König
> > > > Signed-off-by: Daniel Vetter
> > >
> > > Replying to the right patch here...
> > >
> > > Reverting this commit [1] fixed the lockdep warning below while applying
> > > some memory pressure.
> > >
> > > [1] linux-next cbf7c9d86d75 ("mm: track mmu notifiers in fs_reclaim_acquire/release")
> >
> > Hm, then I'm confused because
> > - there's no mmu notifier lockdep map in the splat at all..
> > - the patch is supposed to not change anything for fs_reclaim (but the
> >   interim version got that wrong)
> > - looking at the paths it's kmalloc vs kswapd, both places I totally
> >   expect fs_reclaim to be used.
> >
> > But you're claiming reverting this prevents the lockdep splat. If
> > that's right, then my reasoning above is broken somewhere. Someone
> > less blind than me having an idea?
> >
> > Aside: this is the first email I'd typed, until I realized the first
> > report was against the broken patch and that looked like a much more
> > reasonable explanation (but didn't quite match up with the code
> > paths).
>
> Below diff should undo the functional change in my patch. Can you pls test
> whether the lockdep splat is really gone with that? Might need a lot of
> testing and memory pressure to be sure, since all these reclaim paths
> aren't very deterministic.

No, this patch does not help, but reverting the whole patch still fixed the splat.

> -Daniel
>
> ---
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d807587c9ae6..27ea763c6155 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4191,11 +4191,6 @@ void fs_reclaim_acquire(gfp_t gfp_mask)
>  		if (gfp_mask & __GFP_FS)
>  			__fs_reclaim_acquire();
>  
> -#ifdef CONFIG_MMU_NOTIFIER
> -		lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
> -		lock_map_release(&__mmu_notifier_invalidate_range_start_map);
> -#endif
> -
>  	}
>  }
>  EXPORT_SYMBOL_GPL(fs_reclaim_acquire);
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
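
For readers skimming the thread: the functional change being undone by the
diff above is the "equivalent of might_lock, but on the raw lockdep map"
annotation that the patch adds to fs_reclaim_acquire(). A minimal sketch of
that pattern is below; the surrounding function body is paraphrased for
illustration and is not a verbatim copy of mm/page_alloc.c.

    void fs_reclaim_acquire(gfp_t gfp_mask)
    {
            /* ... gfp filtering and __fs_reclaim_acquire() as in mm/page_alloc.c ... */

    #ifdef CONFIG_MMU_NOTIFIER
            /*
             * The equivalent of might_lock(), but on the raw mmu notifier
             * lockdep map: acquire and immediately release it, so lockdep
             * records that any allocation which may enter reclaim can also
             * end up in mmu notifier invalidation, even if no pte
             * invalidation actually happens on this particular call.
             */
            lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
            lock_map_release(&__mmu_notifier_invalidate_range_start_map);
    #endif
    }

Because the map is acquired and dropped immediately, no lock is actually held
across reclaim; the pairing only teaches lockdep the dependency, which is why
removing these lines (as in the diff) should not change runtime behaviour,
only which splats lockdep can report.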