From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: LKML <linux-kernel@vger.kernel.org>, RCU <rcu@vger.kernel.org>,
	linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	"Paul E . McKenney" <paulmck@kernel.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: "Theodore Y . Ts'o" <tytso@mit.edu>,
	Joel Fernandes <joel@joelfernandes.org>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Uladzislau Rezki <urezki@gmail.com>,
	Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Subject: [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag
Date: Sun,  9 Aug 2020 22:43:53 +0200
Message-ID: <20200809204354.20137-2-urezki@gmail.com>
In-Reply-To: <20200809204354.20137-1-urezki@gmail.com>

Some background and kfree_rcu()
===============================
The pointers to be freed are stored in a per-CPU array to improve
performance, to enable an easier-to-use API, to accommodate vmalloc
memory and to support a single-argument form of kfree_rcu() when only
a pointer is passed. More details are below.

Maintaining such per-CPU arrays requires a dynamic allocation whenever
the current array is fully populated and a new block is needed. See the
example below:

 0 1 2 3      0 1 2 3
|p|p|p|p| -> |p|p|p|p| -> NULL

There are two pointer blocks; each one can store 4 addresses which
will be freed after a grace period has passed. In reality each block
stores PAGE_SIZE / sizeof(void *) pointers. To maintain such blocks,
a single page is obtained via the page allocator:

    bnode = (struct kvfree_rcu_bulk_data *)
        __get_free_page(GFP_NOWAIT | __GFP_NOWARN);

After that the block is attached as the new "head" and its "next"
pointer is set to the previous "head", so the list of blocks can grow
dynamically until it gets drained by the reclaiming thread.
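A minimal sketch of that attachment step is shown below (field and
variable names approximate the per-CPU state used by kfree_rcu(); they
are illustrative, not the exact rcu/tree.c definitions):
<snip>
    struct kvfree_rcu_bulk_data {
        unsigned long nr_records;
        struct kvfree_rcu_bulk_data *next;
        void *records[];   /* roughly PAGE_SIZE / sizeof(void *) slots */
    };

    /* The freshly allocated block becomes the new head of the list. */
    bnode->nr_records = 0;
    bnode->next = krcp->bhead;
    krcp->bhead = bnode;
<snip>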

Please note that there is always a fallback if an allocation fails. For
the single-argument case this is a call to synchronize_rcu(); for the
two-argument case the rcu_head structure embedded in the object being
freed is used instead, paying a cache-miss penalty and invoking kfree()
per object instead of kfree_bulk() for groups of objects.
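A rough sketch of that fallback (names and structure are illustrative
only, not the exact kvfree_rcu() machinery in rcu/tree.c):
<snip>
    if (!bnode) {
        if (head) {
            /* Two-argument case: an rcu_head is embedded in the object,
             * so queue it and pay the per-object kfree() plus cache-miss
             * cost when it is reclaimed later. */
            head->func = func;
            head->next = krcp->head;
            krcp->head = head;
        } else {
            /* Single-argument case: no rcu_head is available, so block
             * until a grace period elapses, then free the object. */
            synchronize_rcu();
            kvfree(ptr);
        }
    }
<snip>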

Why do we maintain arrays/blocks instead of linking objects via the
regular "struct rcu_head" technique? The main reasons are:

a) Memory can be reclaimed by invoking the kfree_bulk() interface,
   which takes an array and the number of entries in it. That removes
   the per-object overhead of calling kfree() for each object and
   thus reduces the reclamation time.

b) It improves locality and reduces the number of cache misses caused
   by "pointer chasing" between objects, which can be spread far apart
   from each other.

c) Support a "single argument" form of kvfree_rcu():
   void *ptr = kvmalloc(some_bytes, GFP_KERNEL);
   if (ptr)
        kvfree_rcu(ptr);

   We need it when an "rcu_head" is not embedded into a structure but
   the object must still be freed after a grace period. Without an
   rcu_head, such single-argument objects cannot be queued on a linked
   list.

   Today, since the single-argument form does not yet exist but there
   is demand for it, people work around its absence with a simple but
   inefficient sequence:
   <snip>
       synchronize_rcu(); /* Can be long and blocks the current context */
       kfree(p);
   <snip>

   More details are here: https://lkml.org/lkml/2020/4/28/1626

d) To distinguish vmalloc pointers from SLAB ones. It becomes possible
   to invoke the right freeing API for the right kind of pointer:
   kfree_bulk() or (TBD) vmalloc_bulk().

Also, please have a look here: https://lkml.org/lkml/2020/7/30/1166
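Below is a rough sketch of how a drained block could dispatch to the
right API, assuming for illustration that a block holds only one kind
of pointer (is_vmalloc_addr(), vfree() and kfree_bulk() are existing
kernel helpers; a vmalloc_bulk()/vfree_bulk() API does not exist yet):
<snip>
    /* Illustrative per-block dispatch; a bulk API for vmalloc is TBD,
     * so vmalloc pointers are shown freed one by one. */
    if (is_vmalloc_addr(bnode->records[0])) {
        for (i = 0; i < bnode->nr_records; i++)
            vfree(bnode->records[i]);
    } else {
        kfree_bulk(bnode->nr_records, bnode->records);
    }
<snip>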

Limitations and concerns (Main part)
====================================
The current memory-allocation interface presents the following
difficulties that this patch is designed to overcome:

a) If built with CONFIG_PROVE_RAW_LOCK_NESTING, lockdep complains
   about a violation of the nesting rules ("BUG: Invalid wait
   context"). It performs raw_spinlock vs. spinlock nesting checks,
   i.e. it is not legal to acquire a spinlock_t while holding a
   raw_spinlock_t.

   Internally kfree_rcu() uses raw_spinlock_t (in the rcu-dev branch)
   whereas the page allocator internally uses spinlock_t to access its
   zones. The code can also be broken from a higher-level point of
   view:
   <snip>
       raw_spin_lock(&some_lock);
       kfree_rcu(some_pointer, some_field_offset);
   <snip>

b) If built with CONFIG_PREEMPT_RT, spinlock_t is converted into a
   sleepable variant. Invoking the page allocator from atomic contexts
   then leads to "BUG: scheduling while atomic".

Proposal
========
1) Introduce a GFP_* flag that makes the allocator return NULL rather
than acquire its own spinlock_t. Such a flag addresses limitations (a)
and (b) described above. It also makes the kfree_rcu() code common for
RT and regular kernels, cleaner, with fewer corner cases to handle, and
it reduces the code size.

Description:
The page allocator has two phases, a fast path and a slow one. We are
interested in the fast path and order-0 allocations. The fast path is
in turn divided into two steps, a lock-less one and one that takes a
lock:

[1] As a first step the page allocator tries to obtain a page from the
    per-cpu list; each CPU has its own. That is why this step is
    lock-less and fast. Basically it disables irqs on the current CPU
    in order to access the per-cpu data and removes the first element
    from the pcp-list. That element/page is returned to the user.

[2] If there is no page available in the per-cpu list, the second step
    kicks in. It removes a specified number of elements from the buddy
    allocator, transferring them to the per-cpu list described in [1].

The number of pre-fetched elements can be controlled via the
/proc/sys/vm/percpu_pagelist_fraction sysctl. This step is not
lock-less; it uses spinlock_t to access the buddy zone. This
step is fully covered by the rmqueue_bulk() function.
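In simplified form the whole fast path looks roughly like this (a
sketch only; refill_from_buddy() and pop_first_page() are hypothetical
stand-ins for the rmqueue_bulk()/list handling in mm/page_alloc.c):
<snip>
    local_irq_save(flags);
    pcp  = &this_cpu_ptr(zone->pageset)->pcp;
    list = &pcp->lists[migratetype];

    if (list_empty(list))
        refill_from_buddy(zone, pcp, list);  /* step [2], takes zone->lock */

    page = pop_first_page(list);             /* step [1], lock-less */
    local_irq_restore(flags);
<snip>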

Summarizing: __GFP_NO_LOCKS covers only step [1] and skips step [2],
because [2] acquires spinlock_t. This makes it very fast, but a higher
failure rate is also expected. Having such a flag addresses limitations
(a) and (b) described above.

Usage: __get_free_page(__GFP_NO_LOCKS);
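For example, a kfree_rcu()-style caller might combine the new flag with
the existing ones roughly as follows (the flag combination here is an
assumption; the actual call site is in patch 2/2):
<snip>
    bnode = (struct kvfree_rcu_bulk_data *)
        __get_free_page(GFP_NOWAIT | __GFP_NOWARN | __GFP_NO_LOCKS);
    if (!bnode) {
        /* Only the pcp fast path was attempted, so NULL is more likely;
         * fall back to the rcu_head/synchronize_rcu() path above. */
    }
<snip>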

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 include/linux/gfp.h            | 11 +++++++++--
 include/trace/events/mmflags.h |  1 +
 mm/page_alloc.c                | 31 +++++++++++++++++++++++++------
 tools/perf/builtin-kmem.c      |  1 +
 4 files changed, 36 insertions(+), 8 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 67a0774e080b..c6f11481c42a 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -39,8 +39,9 @@ struct vm_area_struct;
 #define ___GFP_HARDWALL		0x100000u
 #define ___GFP_THISNODE		0x200000u
 #define ___GFP_ACCOUNT		0x400000u
+#define ___GFP_NO_LOCKS		0x800000u
 #ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP	0x800000u
+#define ___GFP_NOLOCKDEP	0x1000000u
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
@@ -215,16 +216,22 @@ struct vm_area_struct;
  * %__GFP_COMP address compound page metadata.
  *
  * %__GFP_ZERO returns a zeroed page on success.
+ *
+ * %__GFP_NO_LOCKS order-0 allocation without sleepable locks.
+ * It obtains a page from the per-cpu list and is considered
+ * lock-less. No other actions are performed, thus NULL is
+ * returned if the per-cpu list is empty.
  */
 #define __GFP_NOWARN	((__force gfp_t)___GFP_NOWARN)
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
+#define __GFP_NO_LOCKS	((__force gfp_t)___GFP_NO_LOCKS)
 
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP))
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 939092dbcb8b..653c56c478ad 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -45,6 +45,7 @@
 	{(unsigned long)__GFP_RECLAIMABLE,	"__GFP_RECLAIMABLE"},	\
 	{(unsigned long)__GFP_MOVABLE,		"__GFP_MOVABLE"},	\
 	{(unsigned long)__GFP_ACCOUNT,		"__GFP_ACCOUNT"},	\
+	{(unsigned long)__GFP_NO_LOCKS,		"__GFP_NO_LOCKS"},	\
 	{(unsigned long)__GFP_WRITE,		"__GFP_WRITE"},		\
 	{(unsigned long)__GFP_RECLAIM,		"__GFP_RECLAIM"},	\
 	{(unsigned long)__GFP_DIRECT_RECLAIM,	"__GFP_DIRECT_RECLAIM"},\
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e4896e674594..8bf1e3a9a1c3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3305,7 +3305,8 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
 }
 
 /* Remove page from the per-cpu list, caller must protect the list */
-static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
+static struct page *__rmqueue_pcplist(struct zone *zone, gfp_t gfp_flags,
+			int migratetype,
 			unsigned int alloc_flags,
 			struct per_cpu_pages *pcp,
 			struct list_head *list)
@@ -3314,7 +3315,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 
 	do {
 		if (list_empty(list)) {
-			pcp->count += rmqueue_bulk(zone, 0,
+			if (!(gfp_flags & __GFP_NO_LOCKS))
+				pcp->count += rmqueue_bulk(zone, 0,
 					pcp->batch, list,
 					migratetype, alloc_flags);
 			if (unlikely(list_empty(list)))
@@ -3341,8 +3343,20 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 
 	local_irq_save(flags);
 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
-	list = &pcp->lists[migratetype];
-	page = __rmqueue_pcplist(zone,  migratetype, alloc_flags, pcp, list);
+
+	if (!(gfp_flags & __GFP_NO_LOCKS)) {
+		list = &pcp->lists[migratetype];
+		page = __rmqueue_pcplist(zone, gfp_flags, migratetype, alloc_flags, pcp, list);
+	} else {
+		/* Iterate over all migrate types of the pcp-lists. */
+		for (migratetype = 0; migratetype < MIGRATE_PCPTYPES; migratetype++) {
+			list = &pcp->lists[migratetype];
+			page = __rmqueue_pcplist(zone, gfp_flags, migratetype, alloc_flags, pcp, list);
+			if (page)
+				break;
+		}
+	}
+
 	if (page) {
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
 		zone_statistics(preferred_zone, zone);
@@ -3790,7 +3804,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			 * grow this zone if it contains deferred pages.
 			 */
 			if (static_branch_unlikely(&deferred_pages)) {
-				if (_deferred_grow_zone(zone, order))
+				if (!(gfp_mask & __GFP_NO_LOCKS) &&
+						_deferred_grow_zone(zone, order))
 					goto try_this_zone;
 			}
 #endif
@@ -3835,7 +3850,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 				reserve_highatomic_pageblock(page, zone, order);
 
 			return page;
-		} else {
+		} else if (!(gfp_mask & __GFP_NO_LOCKS)) {
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 			/* Try again if zone has deferred pages */
 			if (static_branch_unlikely(&deferred_pages)) {
@@ -4880,6 +4895,10 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	if (likely(page))
 		goto out;
 
+	/* Bypass slow path if __GFP_NO_LOCKS. */
+	if ((gfp_mask & __GFP_NO_LOCKS))
+		goto out;
+
 	/*
 	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
 	 * resp. GFP_NOIO which has to be inherited for all allocation requests
diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index 38a5ab683ebc..662e1d9a0e99 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -656,6 +656,7 @@ static const struct {
 	{ "__GFP_RECLAIMABLE",		"RC" },
 	{ "__GFP_MOVABLE",		"M" },
 	{ "__GFP_ACCOUNT",		"AC" },
+	{ "__GFP_NO_LOCKS",		"NL" },
 	{ "__GFP_WRITE",		"WR" },
 	{ "__GFP_RECLAIM",		"R" },
 	{ "__GFP_DIRECT_RECLAIM",	"DR" },
-- 
2.20.1


Thread overview: 111+ messages
2020-08-09 20:43 [RFC-PATCH 0/2] __GFP_NO_LOCKS Uladzislau Rezki (Sony)
2020-08-09 20:43 ` Uladzislau Rezki (Sony) [this message]
2020-08-10 12:31   ` [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag Michal Hocko
2020-08-10 16:07     ` Uladzislau Rezki
2020-08-10 19:25       ` Michal Hocko
2020-08-11  8:19         ` Michal Hocko
2020-08-11  9:37           ` Uladzislau Rezki
2020-08-11  9:42             ` Uladzislau Rezki
2020-08-11 10:28               ` Michal Hocko
2020-08-11 10:45                 ` Uladzislau Rezki
2020-08-11 10:26             ` Michal Hocko
2020-08-11 11:33               ` Uladzislau Rezki
2020-08-11  9:18         ` Uladzislau Rezki
2020-08-11 10:21           ` Michal Hocko
2020-08-11 11:10             ` Uladzislau Rezki
2020-08-11 14:44         ` Thomas Gleixner
2020-08-11 15:22           ` Thomas Gleixner
2020-08-12 11:38             ` Thomas Gleixner
2020-08-12 12:01               ` Uladzislau Rezki
2020-08-13  7:18               ` Michal Hocko
2020-08-11 15:33           ` Paul E. McKenney
2020-08-11 15:43             ` Thomas Gleixner
2020-08-11 15:56               ` Sebastian Andrzej Siewior
2020-08-11 16:02               ` Paul E. McKenney
2020-08-11 16:19                 ` Paul E. McKenney
2020-08-11 19:39               ` Thomas Gleixner
2020-08-11 21:09                 ` Paul E. McKenney
2020-08-12  0:13                   ` Thomas Gleixner
2020-08-12  4:29                     ` Paul E. McKenney
2020-08-12  8:32                       ` Thomas Gleixner
2020-08-12 13:30                         ` Paul E. McKenney
2020-08-13  7:50                     ` Michal Hocko
2020-08-13  9:58                       ` Uladzislau Rezki
2020-08-13 11:15                         ` Michal Hocko
2020-08-13 13:27                           ` Thomas Gleixner
2020-08-13 13:45                             ` Michal Hocko
2020-08-13 14:32                             ` Matthew Wilcox
2020-08-13 16:14                               ` Thomas Gleixner
2020-08-13 16:22                                 ` Matthew Wilcox
2020-08-13 13:22                         ` Thomas Gleixner
2020-08-13 13:33                           ` Michal Hocko
2020-08-13 14:34                             ` Thomas Gleixner
2020-08-13 14:53                               ` Michal Hocko
2020-08-13 15:41                                 ` Paul E. McKenney
2020-08-13 15:54                                   ` Michal Hocko
2020-08-13 16:04                                     ` Paul E. McKenney
2020-08-13 16:13                                       ` Michal Hocko
2020-08-13 16:29                                         ` Paul E. McKenney
2020-08-13 17:12                                           ` Michal Hocko
2020-08-13 17:27                                             ` Paul E. McKenney
2020-08-13 18:31                                           ` peterz
2020-08-13 19:13                                             ` Michal Hocko
2020-08-13 16:20                                     ` Uladzislau Rezki
2020-08-13 16:36                                       ` Michal Hocko
2020-08-14 11:54                                         ` Uladzislau Rezki
2020-08-13 17:09                                 ` Thomas Gleixner
2020-08-13 17:22                                   ` Michal Hocko
2020-08-14  7:17                                   ` Michal Hocko
2020-08-14 12:15                                     ` Uladzislau Rezki
2020-08-14 12:48                                       ` Michal Hocko
2020-08-14 13:34                                         ` Paul E. McKenney
2020-08-14 14:06                                           ` Michal Hocko
2020-08-14 18:01                                             ` Paul E. McKenney
2020-08-14 23:14                                               ` Thomas Gleixner
2020-08-14 23:41                                                 ` Paul E. McKenney
2020-08-15  0:43                                                   ` Thomas Gleixner
2020-08-15  3:01                                                     ` Paul E. McKenney
2020-08-15  8:27                                                 ` Peter Zijlstra
2020-08-15 13:03                                                   ` Paul E. McKenney
2020-08-15  8:42                                                 ` Peter Zijlstra
2020-08-15 14:18                                                   ` Paul E. McKenney
2020-08-15 14:23                                                     ` Paul E. McKenney
2020-08-17  8:47                                                 ` Michal Hocko
2020-08-13 18:26                               ` peterz
2020-08-13 18:52                                 ` Paul E. McKenney
2020-08-13 22:06                                   ` peterz
2020-08-13 23:23                                     ` Paul E. McKenney
2020-08-13 23:59                                     ` Thomas Gleixner
2020-08-14  8:30                                       ` Peter Zijlstra
2020-08-14 10:23                                         ` peterz
2020-08-14 15:26                                           ` Paul E. McKenney
2020-08-14 14:14                                         ` Paul E. McKenney
2020-08-14 16:11                                           ` Paul E. McKenney
2020-08-14 17:49                                             ` Peter Zijlstra
2020-08-14 18:02                                               ` Paul E. McKenney
2020-08-14 19:33                                                 ` Thomas Gleixner
2020-08-14 20:41                                                   ` Paul E. McKenney
2020-08-14 21:52                                                     ` Peter Zijlstra
2020-08-14 23:27                                                       ` Paul E. McKenney
2020-08-14 23:40                                                       ` Thomas Gleixner
2020-08-16 22:56                                                       ` Uladzislau Rezki
2020-08-17  8:28                                                         ` Michal Hocko
2020-08-17 10:36                                                           ` Uladzislau Rezki
2020-08-17 22:28                                                           ` Paul E. McKenney
2020-08-18  7:43                                                             ` Michal Hocko
2020-08-18 13:53                                                               ` Paul E. McKenney
2020-08-18 14:43                                                                 ` Thomas Gleixner
2020-08-18 16:13                                                                   ` Paul E. McKenney
2020-08-18 16:55                                                                     ` Thomas Gleixner
2020-08-18 17:13                                                                       ` Paul E. McKenney
2020-08-18 23:26                                                                         ` Thomas Gleixner
2020-08-19 23:07                                                                           ` Paul E. McKenney
2020-08-18 15:02                                                                 ` Michal Hocko
2020-08-18 15:45                                                                   ` Uladzislau Rezki
2020-08-18 16:18                                                                   ` Paul E. McKenney
2020-08-14 16:19                                           ` peterz
2020-08-14 18:15                                             ` Paul E. McKenney
2020-08-13 13:29                         ` Uladzislau Rezki
2020-08-13 13:41                           ` Michal Hocko
2020-08-13 14:22                             ` Uladzislau Rezki
2020-08-09 20:43 ` [PATCH 2/2] rcu/tree: use " Uladzislau Rezki (Sony)
