From: Feng Tang <feng.tang@intel.com>
To: Michal Hocko <mhocko@suse.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Andrea Arcangeli <aarcange@redhat.com>,
David Rientjes <rientjes@google.com>,
Mel Gorman <mgorman@techsingularity.net>,
Mike Kravetz <mike.kravetz@oracle.com>,
Randy Dunlap <rdunlap@infradead.org>,
Vlastimil Babka <vbabka@suse.cz>,
"Hansen, Dave" <dave.hansen@intel.com>,
"Widawsky, Ben" <ben.widawsky@intel.com>,
Andi Kleen <ak@linux.intel.com>,
"Williams, Dan J" <dan.j.williams@intel.com>
Subject: Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
Date: Wed, 3 Mar 2021 20:18:33 +0800 [thread overview]
Message-ID: <20210303121833.GB16736@shbuild999.sh.intel.com> (raw)
In-Reply-To: <20210303120717.GA16736@shbuild999.sh.intel.com>
On Wed, Mar 03, 2021 at 08:07:17PM +0800, Tang, Feng wrote:
> Hi Michal,
>
> On Wed, Mar 03, 2021 at 12:39:57PM +0100, Michal Hocko wrote:
> > On Wed 03-03-21 18:20:58, Feng Tang wrote:
> > > When doing broader tests, we noticed allocation slowness in one test
> > > case that mallocs memory with a size slightly bigger than the free
> > > memory of the targeted nodes, but much less than the total free memory
> > > of the system.
> > >
> > > The reason is that the code enters the slowpath of __alloc_pages_nodemask(),
> > > which takes quite some time. Since alloc_pages_policy() will give it a 2nd
> > > try with a NULL nodemask, there is no need to enter the slowpath on the
> > > first try. Add a new gfp bit to skip the slowpath, so that use cases
> > > like this can leverage it.
> > >
> > > With it, the malloc in such a case is much faster, as it never enters
> > > the slowpath.
> > >
> > > Adding a new gfp_mask bit is generally disliked, and another idea is to
> > > add a second nodemask to struct 'alloc_context', so it has two: 'preferred-nmask'
> > > and 'fallback-nmask', which are tried in turn if not NULL. With that,
> > > we can call __alloc_pages_nodemask() only once.
> >
> > Yes, it is very much disliked. Is there any reason why you cannot use
> > GFP_NOWAIT for that purpose?
>
> I did try that in the first place, but it didn't noticeably change the slowness.
> I assume direct reclaim was still involved, as GFP_NOWAIT only affects kswapd
> reclaim.
One thing I tried that does fix the slowness is:

	+ gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);

which explicitly clears both kinds of reclaim. I thought it was too
hacky, though, so I didn't mention it in the commit log.
Thanks,
Feng
Thread overview: 35+ messages
2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
2021-03-03 10:20 ` [PATCH v3 01/14] mm/mempolicy: Add comment for missing LOCAL Feng Tang
2021-03-10 6:27 ` Feng Tang
2021-03-03 10:20 ` [PATCH v3 02/14] mm/mempolicy: convert single preferred_node to full nodemask Feng Tang
2021-03-03 10:20 ` [PATCH v3 03/14] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes Feng Tang
2021-03-03 10:20 ` [PATCH v3 04/14] mm/mempolicy: allow preferred code to take a nodemask Feng Tang
2021-03-03 10:20 ` [PATCH v3 05/14] mm/mempolicy: refactor rebind code for PREFERRED_MANY Feng Tang
2021-03-03 10:20 ` [PATCH v3 06/14] mm/mempolicy: kill v.preferred_nodes Feng Tang
2021-03-03 10:20 ` [PATCH v3 07/14] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND Feng Tang
2021-03-03 10:20 ` [PATCH v3 08/14] mm/mempolicy: Create a page allocator for policy Feng Tang
2021-03-03 10:20 ` [PATCH v3 09/14] mm/mempolicy: Thread allocation for many preferred Feng Tang
2021-03-03 10:20 ` [PATCH v3 10/14] mm/mempolicy: VMA " Feng Tang
2021-03-03 10:20 ` [PATCH v3 11/14] mm/mempolicy: huge-page " Feng Tang
2021-03-03 10:20 ` [PATCH v3 12/14] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY Feng Tang
2021-03-03 10:20 ` [PATCH v3 13/14] mem/mempolicy: unify mpol_new_preferred() and mpol_new_preferred_many() Feng Tang
2021-03-03 10:20 ` [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit Feng Tang
2021-03-03 11:39 ` Michal Hocko
2021-03-03 12:07 ` Feng Tang
2021-03-03 12:18 ` Feng Tang [this message]
2021-03-03 12:32 ` Michal Hocko
2021-03-03 13:18 ` Feng Tang
2021-03-03 13:46 ` Feng Tang
2021-03-03 13:59 ` Michal Hocko
2021-03-03 16:31 ` Ben Widawsky
2021-03-03 16:48 ` Dave Hansen
2021-03-10 5:19 ` Feng Tang
2021-03-10 9:44 ` Michal Hocko
2021-03-10 11:49 ` Feng Tang
2021-03-03 17:14 ` Michal Hocko
2021-03-03 17:22 ` Ben Widawsky
2021-03-04 8:14 ` Feng Tang
2021-03-04 12:59 ` Michal Hocko
2021-03-05 2:21 ` Feng Tang
2021-03-04 12:57 ` Michal Hocko
2021-03-03 13:53 ` Michal Hocko