From: Pavel Tatashin
Date: Thu, 3 Dec 2020 10:15:41 -0500
Subject: Re: [PATCH 5/6] mm: honor PF_MEMALLOC_NOMOVABLE for all allocations
To: Michal Hocko
Cc: LKML, linux-mm, Andrew Morton, Vlastimil Babka, David Hildenbrand,
    Oscar Salvador, Dan Williams, Sasha Levin, Tyler Hicks, Joonsoo Kim,
    mike.kravetz@oracle.com, Steven Rostedt, Ingo Molnar, Jason Gunthorpe,
    Peter Zijlstra, Mel Gorman, Matthew Wilcox, David Rientjes, John Hubbard
In-Reply-To: <20201203091703.GA17338@dhcp22.suse.cz>
References: <20201202052330.474592-1-pasha.tatashin@soleen.com>
    <20201202052330.474592-6-pasha.tatashin@soleen.com>
    <20201203091703.GA17338@dhcp22.suse.cz>

On Thu, Dec 3, 2020 at 4:17 AM Michal Hocko wrote:
>
> On Wed 02-12-20 00:23:29, Pavel Tatashin wrote:
> [...]
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 611799c72da5..7a6d86d0bc5f 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3766,20 +3766,25 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
> >  	return alloc_flags;
> >  }
> >
> > -static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
> > -					unsigned int alloc_flags)
> > +static inline unsigned int cma_alloc_flags(gfp_t gfp_mask,
> > +					unsigned int alloc_flags)
> >  {
> >  #ifdef CONFIG_CMA
> > -	unsigned int pflags = current->flags;
> > -
> > -	if (!(pflags & PF_MEMALLOC_NOMOVABLE) &&
> > -	    gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
> > +	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
> >  		alloc_flags |= ALLOC_CMA;
> > -
> >  #endif
> >  	return alloc_flags;
> >  }
> >
> > +static inline gfp_t current_gfp_checkmovable(gfp_t gfp_mask)
> > +{
> > +	unsigned int pflags = current->flags;
> > +
> > +	if ((pflags & PF_MEMALLOC_NOMOVABLE))
> > +		return gfp_mask & ~__GFP_MOVABLE;
> > +	return gfp_mask;
> > +}
> > +
>
> It sucks that we have to control both ALLOC and gfp flags. But wouldn't
> it be simpler and more straightforward to keep current_alloc_flags as is
> (modulo the PF rename) and hook the gfp mask evaluation into
> current_gfp_context and move it up before the first allocation attempt?

We could do that, but perhaps as a separate patch? I am worried about the
hidden implications of adding the extra scope handling (GFP_NOIO|GFP_NOFS)
to the fast path. Also, current_gfp_context() is used elsewhere, and in
some places removing __GFP_MOVABLE from gfp_mask means we would need to
change other things as well. For example [1], try_to_free_pages() calls
current_gfp_context(gfp_mask), which can reduce the maximum zone index,
yet reclaim_idx is still set with reclaim_idx = gfp_zone(gfp_mask), i.e.
from the original mask rather than the newly determined gfp_mask.

[1] https://soleen.com/source/xref/linux/mm/vmscan.c?r=2da9f630#3239

> All scope flags should be applicable to the hot path as well. It would
> add a few cycles there, but the question is whether that would be
> noticeable over just handling PF_MEMALLOC_NOMOVABLE on its own. The
> cache line would be pulled in anyway.

Let's try it in a separate patch? I will add it in the next version of
this series.

Thank you,
Pasha
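
For reference, a minimal sketch of the alternative Michal suggests:
folding the PF_MEMALLOC_NOMOVABLE check into current_gfp_context() so the
gfp mask is filtered once, before the first allocation attempt. This is a
sketch only, assuming the v5.10-era shape of current_gfp_context() in
include/linux/sched/mm.h; PF_MEMALLOC_NOMOVABLE is the flag introduced
earlier in this series, and the exact form would depend on how the series
is respun.

/*
 * Sketch only -- not the code from this series. Assumes the v5.10-era
 * current_gfp_context(); PF_MEMALLOC_NOMOVABLE comes from earlier
 * patches in this series.
 */
static inline gfp_t current_gfp_context(gfp_t flags)
{
	unsigned int pflags = READ_ONCE(current->flags);

	if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS |
			       PF_MEMALLOC_NOMOVABLE))) {
		/*
		 * NOIO implies both NOIO and NOFS and it is a weaker
		 * context, so always make sure it takes precedence.
		 */
		if (pflags & PF_MEMALLOC_NOIO)
			flags &= ~(__GFP_IO | __GFP_FS);
		else if (pflags & PF_MEMALLOC_NOFS)
			flags &= ~__GFP_FS;

		/* Scoped "no movable" allocations: strip __GFP_MOVABLE. */
		if (pflags & PF_MEMALLOC_NOMOVABLE)
			flags &= ~__GFP_MOVABLE;
	}
	return flags;
}

With something like this in place, the page allocator could run the caller's
mask through current_gfp_context() once up front, and cma_alloc_flags()
would only need to look at the migratetype of the already-filtered mask.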
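
To illustrate the reclaim_idx mismatch Pavel points to at [1]: in
try_to_free_pages() of that era, the scan_control is set up roughly as
below (abridged excerpt, field list and reclaim logic trimmed). Stripping
__GFP_MOVABLE inside current_gfp_context() would not by itself lower
reclaim_idx, because reclaim_idx is still derived from the unfiltered
caller mask.

/* Abridged from mm/vmscan.c around v5.10; shown only to illustrate the point. */
unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
				gfp_t gfp_mask, nodemask_t *nodemask)
{
	struct scan_control sc = {
		.nr_to_reclaim	= SWAP_CLUSTER_MAX,
		.gfp_mask	= current_gfp_context(gfp_mask), /* filtered mask */
		.reclaim_idx	= gfp_zone(gfp_mask),            /* original mask */
		.order		= order,
		.nodemask	= nodemask,
		/* ... remaining fields omitted ... */
	};
	/* ... reclaim logic omitted ... */
}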