From: Uladzislau Rezki <urezki@gmail.com>
To: Mel Gorman <mgorman@suse.de>, Christoph Hellwig <hch@infradead.org>
Cc: Uladzislau Rezki <urezki@gmail.com>,
	Christoph Hellwig <hch@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nicholas Piggin <npiggin@gmail.com>,
	Hillf Danton <hdanton@sina.com>, Michal Hocko <mhocko@suse.com>,
	Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>,
	Steven Rostedt <rostedt@goodmis.org>
Subject: Re: [PATCH 2/3] mm/vmalloc: Switch to bulk allocator in __vmalloc_area_node()
Date: Wed, 19 May 2021 21:52:14 +0200	[thread overview]
Message-ID: <20210519195214.GA2343@pc638.lan> (raw)
In-Reply-To: <20210519155630.GD3672@suse.de>

> On Wed, May 19, 2021 at 04:39:00PM +0200, Uladzislau Rezki wrote:
> > > > +	/*
> > > > +	 * If not enough pages were obtained to accomplish an
> > > > +	 * allocation request, free them via __vfree() if any.
> > > > +	 */
> > > > +	if (area->nr_pages != nr_small_pages) {
> > > > +		warn_alloc(gfp_mask, NULL,
> > > > +			"vmalloc size %lu allocation failure: "
> > > > +			"page order %u allocation failed",
> > > > +			area->nr_pages * PAGE_SIZE, page_order);
> > > > +		goto fail;
> > > > +	}
> > > 
> > > From reading __alloc_pages_bulk, not allocating all pages is something
> > > that can happen fairly easily.  Shouldn't we try to allocate the missing
> > > pages manually and/or retry here?
> > > 
> >
> > It is a good point. The bulk allocator, as I see it, only tries the
> > pcp-list and falls back to a single-page allocation once that fails, so
> > the array may not be fully populated.
> > 
> 
> Partially correct. It does allocate via the pcp-list, but the pcp-list will
> be refilled if it's empty, so if the bulk allocator returns fewer pages
> than requested, it may be due to hitting watermarks or the local zone
> being depleted. It does not take any special action to correct the
> situation or stall, e.g. wake kswapd, enter direct reclaim, allocate from
> a remote node, etc.
> 
> If no pages were allocated, it'll try to allocate at least one page via a
> single allocation request in case the bulk failure would push the zone
> over the watermark but 1 page does not. That path as a side-effect would
> also wake kswapd.
> 
OK. A single-page allocation can enter the slow path, i.e. direct reclaim,
etc., to satisfy the watermarks.

> > In that case probably it makes sense to manually populate it using
> > single page allocator.
> > 
> > Mel, could you please also comment on it?
> > 
> 
> It is by design because it's unknown if callers can recover or if so,
> how they want to recover and the primary intent behind the bulk allocator
> was speed. In the case of network, it only wants some pages quickly so as
> long as it gets 1, it makes progress. For the sunrpc user, it's willing
> to wait and retry. For vmalloc, I'm unsure what a suitable recovery path
> should be as I do not have a good handle on workloads that are sensitive
> to vmalloc performance. The obvious option would be to loop and allocate
> single pages with alloc_pages_node(), understanding that the additional
> pages may take longer to allocate.
> 
I got it. At the very least we should fall back to the single-page
allocator; that is how we used to allocate before (and it is still what we
do for high-order pages). If that also fails to obtain a page, we are done.

Basically, the single-page allocator is more permissive, so it has a higher
chance of success. Therefore a fallback to it makes sense.

Thanks.

--
Vlad Rezki

Thread overview: 13+ messages
2021-05-16 20:20 [PATCH 0/3] vmalloc() vs bulk allocator v2 Uladzislau Rezki (Sony)
2021-05-16 20:20 ` [PATCH 1/3] mm/page_alloc: Add an alloc_pages_bulk_array_node() helper Uladzislau Rezki (Sony)
2021-05-16 20:20 ` [PATCH 2/3] mm/vmalloc: Switch to bulk allocator in __vmalloc_area_node() Uladzislau Rezki (Sony)
2021-05-17  8:24   ` Mel Gorman
2021-05-17 11:51     ` Uladzislau Rezki
2021-05-17 11:51       ` Uladzislau Rezki
2021-05-19 13:44   ` Christoph Hellwig
2021-05-19 14:39     ` Uladzislau Rezki
2021-05-19 15:56       ` Mel Gorman
2021-05-19 19:52         ` Uladzislau Rezki [this message]
2021-05-19 21:07           ` Uladzislau Rezki
2021-06-28 23:00             ` Andrew Morton
2021-05-16 20:20 ` [PATCH 3/3] mm/vmalloc: Print a warning message first on failure Uladzislau Rezki (Sony)
