Linux-mm Archive on lore.kernel.org
* [RFC PATCH 0/3] Introduce a bulk order-0 page allocator for sunrpc
@ 2021-02-24 10:26 Mel Gorman
From: Mel Gorman @ 2021-02-24 10:26 UTC
  To: Chuck Lever, Jesper Dangaard Brouer
  Cc: LKML, Linux-Net, Linux-MM, Linux-NFS, Mel Gorman

This is a prototype series that introduces a bulk order-0 page allocator
with sunrpc being the first user. The implementation is not particularly
efficient and the intention is to iron out what the semantics of the API
should be. That said, sunrpc was reported to have reduced allocation
latency when refilling a pool.
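
For discussion, here is a caller-side sketch of the sort of API the
series is probing. The signature and names are illustrative
placeholders, not the prototype from patch 2:

	/*
	 * Hypothetical signature for illustration only; the real
	 * prototype is defined in patch 2. Returns the number of
	 * order-0 pages placed on @list.
	 */
	int alloc_pages_bulk(gfp_t gfp, unsigned long nr_pages,
			     struct list_head *list);

	/* Refill a pool with one call instead of a page-at-a-time loop */
	LIST_HEAD(list);
	struct page *page, *tmp;
	int allocated;

	allocated = alloc_pages_bulk(GFP_KERNEL, 16, &list);
	list_for_each_entry_safe(page, tmp, &list, lru) {
		list_del(&page->lru);
		/* hand the page over to the pool being refilled */
	}
	/* allocated may be short of 16; callers must handle that */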

As a side-note, while the implementation could be more efficient, that
would require fairly deep surgery in numerous places. First, the lock
scope would need to be significantly reduced, particularly as vmstat,
the per-cpu lists and the buddy allocator have different locking
protocols that overlap -- e.g. all partially depend on IRQs being
disabled at various points. Second, the core of the allocator deals
with single pages whereas both the bulk allocator and the per-cpu
allocator operate in batches. All of that has to be reconciled with
all the existing users and their constraints (memory offline, CMA and
cpusets being the trickiest).

In terms of the semantics required by new users, my preference is that
a pair of patches be applied -- the first adding the required semantic
to the bulk allocator and the second adding the new user.

Patch 1 of this series is a cleanup to sunrpc; it could be merged
	separately but is included here for convenience.

Patch 2 is the prototype bulk allocator

Patch 3 is the sunrpc user; a simplified sketch of the conversion
	follows this list. Chuck also has a patch which further caches
	pages, but it is not included in this series. It is not directly
	related to the bulk allocator and, as it caches pages, it might
	raise other concerns (e.g. does it need a shrinker?).
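
For context, a much-simplified sketch of what the patch 3 conversion
amounts to. Names like "needed" and "filled" are illustrative and this
is not the actual diff against net/sunrpc/svc_xprt.c:

	/* Before: the shortfall is refilled one page at a time */
	for (i = 0; i < pages; i++) {
		if (!rqstp->rq_pages[i])
			rqstp->rq_pages[i] = alloc_page(GFP_KERNEL);
	}

	/* After: request the whole shortfall with one bulk call */
	LIST_HEAD(list);
	int got = alloc_pages_bulk(GFP_KERNEL, needed, &list);

	while (!list_empty(&list)) {
		struct page *page = list_first_entry(&list,
						     struct page, lru);

		list_del(&page->lru);
		rqstp->rq_pages[filled++] = page;
	}
	if (got < needed) {
		/* sleep and retry, as the existing code does */
	}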

This has only been lightly tested on a low-end NFS server. It did not
break, but it would benefit from an evaluation to see how much, if any,
the headline performance changes. The biggest concern is that a light
test case showed a *lot* of bulk requests for a single page, which get
delegated to the normal allocator. The same consideration applies to
any other users.
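
To make that concern concrete: with a one-page request there is no
batch to amortise the locking over, so the natural structure (again a
sketch, not the actual patch) is for the bulk path to simply punt such
requests to the existing single-page allocator:

	/* Sketch only: bulk entry point punting the nr_pages == 1 case */
	int alloc_pages_bulk(gfp_t gfp, unsigned long nr_pages,
			     struct list_head *list)
	{
		if (nr_pages == 1) {
			struct page *page = alloc_page(gfp);

			if (!page)
				return 0;
			list_add(&page->lru, list);
			return 1;
		}

		/*
		 * The batched path would go here, taking the zone lock
		 * once for the whole request; elided in this sketch.
		 */
		return 0;
	}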

 include/linux/gfp.h   |  13 +++++
 mm/page_alloc.c       | 113 +++++++++++++++++++++++++++++++++++++++++-
 net/sunrpc/svc_xprt.c |  47 ++++++++++++------
 3 files changed, 157 insertions(+), 16 deletions(-)

-- 
2.26.2


