From: Matthew Wilcox <willy@infradead.org>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Chuck Lever <chuck.lever@oracle.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Christoph Hellwig <hch@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-Net <netdev@vger.kernel.org>, Linux-MM <linux-mm@kvack.org>,
	Linux-NFS <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH 0/5] Introduce a bulk order-0 page allocator with two in-tree users
Date: Fri, 12 Mar 2021 15:10:25 +0000
Message-ID: <20210312151025.GB2577561@casper.infradead.org>
In-Reply-To: <20210311084827.GS3697@techsingularity.net>

On Thu, Mar 11, 2021 at 08:48:27AM +0000, Mel Gorman wrote:
> I don't have that information unfortunately. It's a chicken and egg
> problem because without the API, there is no point creating new users.
> For example, fault around or readahead could potentially batch pages
> but whether it is actually noticeable when page zeroing has to happen
> is a completely different story. It's a similar story for SLUB: we know
> lower-order allocations hurt some microbenchmarks like hackbench-sockets
> but have not quantified what happens if SLUB batch allocates pages when
> high-order allocations fail.
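
For concreteness, a sketch of how a batch user might consume the bulk
allocator this series adds. The list-based signature is an assumption
taken from the posted patches and may change in later revisions:

	LIST_HEAD(pages);
	struct page *page, *next;

	/* Ask for nr order-0 pages on @pages; partial success is
	 * possible, so callers fall back to single-page allocation
	 * for whatever is missing. */
	if (alloc_pages_bulk(GFP_KERNEL, nr, &pages) < nr)
		/* allocate the remainder with alloc_page() */;

	list_for_each_entry_safe(page, next, &pages, lru) {
		list_del(&page->lru);
		/* hand the page to the consumer, e.g. an rq_pages slot */
	}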

I'm planning on reducing overhead in the readahead path by allocating
higher-order pages rather than by allocating a batch of order-0 pages.
With the old ->readpages interface, it would have made sense to allocate
a batch of pages, but with the new ->readahead interface we put the pages
into the page cache on behalf of the filesystem, so batching the
allocation doesn't make as much sense any more.
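
To make that concrete, a minimal sketch of the higher-order approach,
assuming the page cache accepts arbitrary-order compound pages (which is
what the ongoing folio work enables); the surrounding loop, index setup
and error handling are elided:

	struct page *page;

	/* One order-2 allocation covers four consecutive indices, so
	 * the page cache sees a single insertion instead of four. */
	page = alloc_pages(readahead_gfp_mask(mapping) | __GFP_COMP, 2);
	if (page && !add_to_page_cache_lru(page, mapping, index,
					   readahead_gfp_mask(mapping)))
		nr_pages += 4;	/* the fs reads 4 pages' worth in one go */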

Right now, measuring performance in the readahead path is hard because
we end up contending against kswapd as it tries to evict all the clean
pages we read ahead from this same file earlier.  We could avoid that by
using N files, each about half the size of memory, but then we restart
the readahead algorithm for each file ...
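
If it helps, one way the N-file setup could look as a throwaway
userspace harness; everything here (file names, the count of four) is
made up, and the timing pass is only described in a comment:

	/* Hypothetical harness for the N-files variant: create files of
	 * about half of RAM each, so streaming any single one of them
	 * cannot fill memory with that file's clean pages. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/sysinfo.h>

	int main(void)
	{
		struct sysinfo si;
		static char buf[1 << 20];	/* zeroed, off-stack */
		char name[32];

		sysinfo(&si);
		long long fsize = (long long)si.totalram * si.mem_unit / 2;

		for (int i = 0; i < 4; i++) {
			snprintf(name, sizeof(name), "ra-test-%d", i);
			int fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
			if (fd < 0)
				return 1;
			for (long long off = 0; off < fsize; off += sizeof(buf))
				write(fd, buf, sizeof(buf));
			close(fd);
		}
		/* After dropping caches, time a sequential read of each
		 * file in turn to exercise readahead from a cold start. */
		return 0;
	}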

I feel like the real solution for that is to do a GFP_NOWAIT allocation,
then try to evict earlier pages for the same file we're working on so
that kswapd doesn't get woken up if we're just streaming a read through
a gargantuan file.  But I should probably talk to Johannes first.
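
Roughly, that idea might look like the sketch below. This is not code
from any tree: RA_KEEP_PAGES is a made-up constant for how much of the
file's tail to keep resident, and a real version would need to
coordinate with reclaim (hence Johannes):

	static struct page *ra_alloc_page(struct address_space *mapping,
					  pgoff_t index)
	{
		/* Try an atomic allocation first so kswapd is not woken. */
		struct page *page = __page_cache_alloc(GFP_NOWAIT |
						       __GFP_NOWARN);

		if (!page && index > RA_KEEP_PAGES) {
			/* Under pressure: drop the clean pages we already
			 * streamed past in this same file, then retry. */
			invalidate_mapping_pages(mapping, 0,
						 index - RA_KEEP_PAGES);
			page = __page_cache_alloc(readahead_gfp_mask(mapping));
		}
		return page;
	}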

Thread overview: 28+ messages
2021-03-10 10:46 [PATCH 0/5] Introduce a bulk order-0 page allocator with two in-tree users Mel Gorman
2021-03-10 10:46 ` [PATCH 1/5] SUNRPC: Set rq_page_end differently Mel Gorman
2021-03-10 10:46 ` [PATCH 2/5] mm/page_alloc: Add a bulk page allocator Mel Gorman
2021-03-10 23:46   ` Andrew Morton
2021-03-11  8:42     ` Mel Gorman
2021-03-12 11:46       ` Jesper Dangaard Brouer
2021-03-12 13:44         ` Mel Gorman
2021-03-12 14:58         ` Matthew Wilcox
2021-03-12 16:03           ` Mel Gorman
2021-03-12 21:08             ` Matthew Wilcox
2021-03-13 13:16               ` Mel Gorman
2021-03-13 16:39                 ` Matthew Wilcox
2021-03-13 16:56                   ` Chuck Lever III
2021-03-13 19:33                     ` Matthew Wilcox
2021-03-14 12:52                       ` Mel Gorman
2021-03-14 15:22                         ` Chuck Lever III
2021-03-15 10:42                           ` Mel Gorman
2021-03-15 16:42                             ` Jesper Dangaard Brouer
2021-03-19 17:10                         ` Jesper Dangaard Brouer
2021-03-12 12:43   ` Matthew Wilcox
2021-03-12 14:15     ` Mel Gorman
2021-03-10 10:46 ` [PATCH 3/5] SUNRPC: Refresh rq_pages using a bulk page allocator Mel Gorman
2021-03-10 10:46 ` [PATCH 4/5] net: page_pool: refactor dma_map into own function page_pool_dma_map Mel Gorman
2021-03-10 10:46 ` [PATCH 5/5] net: page_pool: use alloc_pages_bulk in refill code path Mel Gorman
2021-03-10 23:47 ` [PATCH 0/5] Introduce a bulk order-0 page allocator with two in-tree users Andrew Morton
2021-03-11  8:48   ` Mel Gorman
2021-03-12 15:10     ` Matthew Wilcox [this message]