* [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes
@ 2021-04-15 17:19 Uladzislau Rezki (Sony)
2021-04-15 17:19 ` [PATCH v2 1/5] kvfree_rcu: Release a page cache under memory pressure Uladzislau Rezki (Sony)
` (5 more replies)
0 siblings, 6 replies; 8+ messages in thread
From: Uladzislau Rezki (Sony) @ 2021-04-15 17:19 UTC (permalink / raw)
To: LKML, RCU, Paul E . McKenney
Cc: Michal Hocko, Andrew Morton, Daniel Axtens, Frederic Weisbecker,
Neeraj Upadhyay, Joel Fernandes, Peter Zijlstra, Thomas Gleixner,
Theodore Y . Ts'o, Sebastian Andrzej Siewior,
Uladzislau Rezki, Oleksiy Avramchenko
This is a v2 of a small series. See the changelog below:
V1 -> V2:
- document the rcu_delay_page_cache_fill_msec parameter;
- drop the "kvfree_rcu: introduce "flags" variable" patch;
- reword commit messages;
- in the patch [1], do not use READ_ONCE() instances in
get_cached_bnode()/put_cached_bnode(), since access there is
protected by the lock;
- Capitalize the word following ":" in commit messages.
Uladzislau Rezki (Sony) (4):
[1] kvfree_rcu: Use [READ/WRITE]_ONCE() macros to access to nr_bkv_objs
[2] kvfree_rcu: Add a bulk-list check when a scheduler is run
[3] kvfree_rcu: Update "monitor_todo" once a batch is started
[4] kvfree_rcu: Use kfree_rcu_monitor() instead of open-coded variant
Zhang Qiang (1):
[5] kvfree_rcu: Release a page cache under memory pressure
.../admin-guide/kernel-parameters.txt | 5 +
kernel/rcu/tree.c | 92 +++++++++++++++----
2 files changed, 77 insertions(+), 20 deletions(-)
--
2.20.1
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH v2 1/5] kvfree_rcu: Release a page cache under memory pressure
2021-04-15 17:19 [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes Uladzislau Rezki (Sony)
@ 2021-04-15 17:19 ` Uladzislau Rezki (Sony)
2021-04-15 17:19 ` [PATCH v2 2/5] kvfree_rcu: Use [READ/WRITE]_ONCE() macros to access to nr_bkv_objs Uladzislau Rezki (Sony)
` (4 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Uladzislau Rezki (Sony) @ 2021-04-15 17:19 UTC (permalink / raw)
To: LKML, RCU, Paul E . McKenney
Cc: Michal Hocko, Andrew Morton, Daniel Axtens, Frederic Weisbecker,
Neeraj Upadhyay, Joel Fernandes, Peter Zijlstra, Thomas Gleixner,
Theodore Y . Ts'o, Sebastian Andrzej Siewior,
Uladzislau Rezki, Oleksiy Avramchenko, Zhang Qiang
From: Zhang Qiang <qiang.zhang@windriver.com>
Add a drain_page_cache() function to drain the per-CPU page cache.
The reason behind it is that a system can run into a low-memory
condition; in that case the page shrinker asks its users to free
their caches in order to make extra memory available for other
needs in the system.
When the system hits such a condition, the page cache is drained
for all CPUs. By default, refilling of the page cache is then
delayed by a 5-second interval until the memory pressure
disappears; if needed, that delay can be changed. See the
rcu_delay_page_cache_fill_msec module parameter.
Co-developed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Zqiang <qiang.zhang@windriver.com>
---
.../admin-guide/kernel-parameters.txt | 5 ++
kernel/rcu/tree.c | 82 +++++++++++++++++--
2 files changed, 78 insertions(+), 9 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 78dc87435ca7..6b769f5cf14c 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4154,6 +4154,11 @@
whole algorithm to behave better in low memory
condition.
+	rcutree.rcu_delay_page_cache_fill_msec= [KNL]
+	Set the delay, in milliseconds, for a page-cache
+	refill when a low-memory condition occurs. The
+	allowed value is within the 0:100000 range.
+
rcutree.jiffies_till_first_fqs= [KNL]
Set delay from grace-period initialization to
first attempt to force quiescent states.
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 2c9cf4df942c..742152d6b952 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -186,6 +186,17 @@ module_param(rcu_unlock_delay, int, 0444);
static int rcu_min_cached_objs = 5;
module_param(rcu_min_cached_objs, int, 0444);
+// A page shrinker can ask to free extra pages to make them
+// available for other needs in the system. Usually that happens
+// under a low-memory condition, in which case refilling of the
+// page cache should be held off for a while.
+//
+// The default value is 5 seconds. That is long enough to reduce
+// interference and racing with a shrinker while the cache is
+// being drained.
+static int rcu_delay_page_cache_fill_msec = 5000;
+module_param(rcu_delay_page_cache_fill_msec, int, 0444);
+
/* Retrieve RCU kthreads priority for rcutorture */
int rcu_get_gp_kthreads_prio(void)
{
@@ -3144,6 +3155,7 @@ struct kfree_rcu_cpu_work {
* Even though it is lockless an access has to be protected by the
* per-cpu lock.
* @page_cache_work: A work to refill the cache when it is empty
+ * @backoff_page_cache_fill: Delay the page-cache refilling
* @work_in_progress: Indicates that page_cache_work is running
* @hrtimer: A hrtimer for scheduling a page_cache_work
* @nr_bkv_objs: number of allocated objects at @bkvcache.
@@ -3163,7 +3175,8 @@ struct kfree_rcu_cpu {
bool initialized;
int count;
- struct work_struct page_cache_work;
+ struct delayed_work page_cache_work;
+ atomic_t backoff_page_cache_fill;
atomic_t work_in_progress;
struct hrtimer hrtimer;
@@ -3229,6 +3242,26 @@ put_cached_bnode(struct kfree_rcu_cpu *krcp,
}
+static int
+drain_page_cache(struct kfree_rcu_cpu *krcp)
+{
+ unsigned long flags;
+ struct llist_node *page_list, *pos, *n;
+ int freed = 0;
+
+ raw_spin_lock_irqsave(&krcp->lock, flags);
+ page_list = llist_del_all(&krcp->bkvcache);
+ krcp->nr_bkv_objs = 0;
+ raw_spin_unlock_irqrestore(&krcp->lock, flags);
+
+ llist_for_each_safe(pos, n, page_list) {
+ free_page((unsigned long)pos);
+ freed++;
+ }
+
+ return freed;
+}
+
/*
* This function is invoked in workqueue context after a grace period.
* It frees all the objects queued on ->bhead_free or ->head_free.
@@ -3419,7 +3452,7 @@ schedule_page_work_fn(struct hrtimer *t)
struct kfree_rcu_cpu *krcp =
container_of(t, struct kfree_rcu_cpu, hrtimer);
- queue_work(system_highpri_wq, &krcp->page_cache_work);
+ queue_delayed_work(system_highpri_wq, &krcp->page_cache_work, 0);
return HRTIMER_NORESTART;
}
@@ -3428,12 +3461,16 @@ static void fill_page_cache_func(struct work_struct *work)
struct kvfree_rcu_bulk_data *bnode;
struct kfree_rcu_cpu *krcp =
container_of(work, struct kfree_rcu_cpu,
- page_cache_work);
+ page_cache_work.work);
unsigned long flags;
+ int nr_pages;
bool pushed;
int i;
- for (i = 0; i < rcu_min_cached_objs; i++) {
+ nr_pages = atomic_read(&krcp->backoff_page_cache_fill) ?
+ 1 : rcu_min_cached_objs;
+
+ for (i = 0; i < nr_pages; i++) {
bnode = (struct kvfree_rcu_bulk_data *)
__get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
@@ -3450,6 +3487,7 @@ static void fill_page_cache_func(struct work_struct *work)
}
atomic_set(&krcp->work_in_progress, 0);
+ atomic_set(&krcp->backoff_page_cache_fill, 0);
}
static void
@@ -3457,10 +3495,15 @@ run_page_cache_worker(struct kfree_rcu_cpu *krcp)
{
if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
!atomic_xchg(&krcp->work_in_progress, 1)) {
- hrtimer_init(&krcp->hrtimer, CLOCK_MONOTONIC,
- HRTIMER_MODE_REL);
- krcp->hrtimer.function = schedule_page_work_fn;
- hrtimer_start(&krcp->hrtimer, 0, HRTIMER_MODE_REL);
+ if (atomic_read(&krcp->backoff_page_cache_fill)) {
+ queue_delayed_work(system_wq,
+ &krcp->page_cache_work,
+ msecs_to_jiffies(rcu_delay_page_cache_fill_msec));
+ } else {
+ hrtimer_init(&krcp->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ krcp->hrtimer.function = schedule_page_work_fn;
+ hrtimer_start(&krcp->hrtimer, 0, HRTIMER_MODE_REL);
+ }
}
}
@@ -3612,12 +3655,19 @@ kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
{
int cpu;
unsigned long count = 0;
+ unsigned long flags;
/* Snapshot count of all CPUs */
for_each_possible_cpu(cpu) {
struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
count += READ_ONCE(krcp->count);
+
+ raw_spin_lock_irqsave(&krcp->lock, flags);
+ count += krcp->nr_bkv_objs;
+ raw_spin_unlock_irqrestore(&krcp->lock, flags);
+
+ atomic_set(&krcp->backoff_page_cache_fill, 1);
}
return count;
@@ -3634,6 +3684,8 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
count = krcp->count;
+ count += drain_page_cache(krcp);
+
raw_spin_lock_irqsave(&krcp->lock, flags);
if (krcp->monitor_todo)
kfree_rcu_drain_unlock(krcp, flags);
@@ -4599,6 +4651,18 @@ static void __init kfree_rcu_batch_init(void)
int cpu;
int i;
+ /* Clamp it to [0:100] seconds interval. */
+ if (rcu_delay_page_cache_fill_msec < 0 ||
+ rcu_delay_page_cache_fill_msec > 100 * MSEC_PER_SEC) {
+
+ rcu_delay_page_cache_fill_msec =
+ clamp(rcu_delay_page_cache_fill_msec, 0,
+ (int) (100 * MSEC_PER_SEC));
+
+ pr_info("Adjusting a cache fill delay interval to %d ms.\n",
+ rcu_delay_page_cache_fill_msec);
+ }
+
for_each_possible_cpu(cpu) {
struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
@@ -4608,7 +4672,7 @@ static void __init kfree_rcu_batch_init(void)
}
INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor);
- INIT_WORK(&krcp->page_cache_work, fill_page_cache_func);
+ INIT_DELAYED_WORK(&krcp->page_cache_work, fill_page_cache_func);
krcp->initialized = true;
}
if (register_shrinker(&kfree_rcu_shrinker))
--
2.20.1
* [PATCH v2 2/5] kvfree_rcu: Use [READ/WRITE]_ONCE() macros to access to nr_bkv_objs
2021-04-15 17:19 [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes Uladzislau Rezki (Sony)
2021-04-15 17:19 ` [PATCH v2 1/5] kvfree_rcu: Release a page cache under memory pressure Uladzislau Rezki (Sony)
@ 2021-04-15 17:19 ` Uladzislau Rezki (Sony)
2021-04-15 17:19 ` [PATCH v2 3/5] kvfree_rcu: Add a bulk-list check when a scheduler is run Uladzislau Rezki (Sony)
` (3 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Uladzislau Rezki (Sony) @ 2021-04-15 17:19 UTC (permalink / raw)
To: LKML, RCU, Paul E . McKenney
Cc: Michal Hocko, Andrew Morton, Daniel Axtens, Frederic Weisbecker,
Neeraj Upadhyay, Joel Fernandes, Peter Zijlstra, Thomas Gleixner,
Theodore Y . Ts'o, Sebastian Andrzej Siewior,
Uladzislau Rezki, Oleksiy Avramchenko
nr_bkv_objs represents the number of objects in the page cache.
Modifying it requires taking the lock, but a lockless reader may
snapshot it concurrently. Switch to the READ_ONCE()/WRITE_ONCE()
macros to provide tear-free access to that counter; the shrinker
is one such lockless user.
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
kernel/rcu/tree.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 742152d6b952..07e718fdea12 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3223,7 +3223,7 @@ get_cached_bnode(struct kfree_rcu_cpu *krcp)
if (!krcp->nr_bkv_objs)
return NULL;
- krcp->nr_bkv_objs--;
+ WRITE_ONCE(krcp->nr_bkv_objs, krcp->nr_bkv_objs - 1);
return (struct kvfree_rcu_bulk_data *)
llist_del_first(&krcp->bkvcache);
}
@@ -3237,9 +3237,8 @@ put_cached_bnode(struct kfree_rcu_cpu *krcp,
return false;
llist_add((struct llist_node *) bnode, &krcp->bkvcache);
- krcp->nr_bkv_objs++;
+ WRITE_ONCE(krcp->nr_bkv_objs, krcp->nr_bkv_objs + 1);
return true;
-
}
static int
@@ -3251,7 +3250,7 @@ drain_page_cache(struct kfree_rcu_cpu *krcp)
raw_spin_lock_irqsave(&krcp->lock, flags);
page_list = llist_del_all(&krcp->bkvcache);
- krcp->nr_bkv_objs = 0;
+ WRITE_ONCE(krcp->nr_bkv_objs, 0);
raw_spin_unlock_irqrestore(&krcp->lock, flags);
llist_for_each_safe(pos, n, page_list) {
@@ -3655,18 +3654,13 @@ kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
{
int cpu;
unsigned long count = 0;
- unsigned long flags;
/* Snapshot count of all CPUs */
for_each_possible_cpu(cpu) {
struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
count += READ_ONCE(krcp->count);
-
- raw_spin_lock_irqsave(&krcp->lock, flags);
- count += krcp->nr_bkv_objs;
- raw_spin_unlock_irqrestore(&krcp->lock, flags);
-
+ count += READ_ONCE(krcp->nr_bkv_objs);
atomic_set(&krcp->backoff_page_cache_fill, 1);
}
--
2.20.1
* [PATCH v2 3/5] kvfree_rcu: Add a bulk-list check when a scheduler is run
2021-04-15 17:19 [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes Uladzislau Rezki (Sony)
2021-04-15 17:19 ` [PATCH v2 1/5] kvfree_rcu: Release a page cache under memory pressure Uladzislau Rezki (Sony)
2021-04-15 17:19 ` [PATCH v2 2/5] kvfree_rcu: Use [READ/WRITE]_ONCE() macros to access to nr_bkv_objs Uladzislau Rezki (Sony)
@ 2021-04-15 17:19 ` Uladzislau Rezki (Sony)
2021-04-15 17:19 ` [PATCH v2 4/5] kvfree_rcu: Update "monitor_todo" once a batch is started Uladzislau Rezki (Sony)
` (2 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Uladzislau Rezki (Sony) @ 2021-04-15 17:19 UTC (permalink / raw)
To: LKML, RCU, Paul E . McKenney
Cc: Michal Hocko, Andrew Morton, Daniel Axtens, Frederic Weisbecker,
Neeraj Upadhyay, Joel Fernandes, Peter Zijlstra, Thomas Gleixner,
Theodore Y . Ts'o, Sebastian Andrzej Siewior,
Uladzislau Rezki, Oleksiy Avramchenko
RCU_SCHEDULER_RUNNING is set once the scheduler is available.
That signal is used to check for, and queue, the "monitor work"
to reclaim any objects freed during the boot-up phase. It is
needed because the main kvfree_rcu() path cannot queue the work
until the scheduler is up and running.
Currently only "krcp->head" is checked in that helper to figure
out whether there are outstanding objects to be released, but
that is just one channel. Since the bulk interface was added,
there are two more that have to be checked as well:
"krcp->bkvhead[0]" and "krcp->bkvhead[1]". Therefore, queue the
"monitor work" if _any_ of the corresponding channels is non-empty.
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
kernel/rcu/tree.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 07e718fdea12..3ddc9dc97487 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3712,7 +3712,8 @@ void __init kfree_rcu_scheduler_running(void)
struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
raw_spin_lock_irqsave(&krcp->lock, flags);
- if (!krcp->head || krcp->monitor_todo) {
+ if ((!krcp->bkvhead[0] && !krcp->bkvhead[1] && !krcp->head) ||
+ krcp->monitor_todo) {
raw_spin_unlock_irqrestore(&krcp->lock, flags);
continue;
}
--
2.20.1
* [PATCH v2 4/5] kvfree_rcu: Update "monitor_todo" once a batch is started
2021-04-15 17:19 [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes Uladzislau Rezki (Sony)
` (2 preceding siblings ...)
2021-04-15 17:19 ` [PATCH v2 3/5] kvfree_rcu: Add a bulk-list check when a scheduler is run Uladzislau Rezki (Sony)
@ 2021-04-15 17:19 ` Uladzislau Rezki (Sony)
2021-04-15 17:20 ` [PATCH v2 5/5] kvfree_rcu: Use kfree_rcu_monitor() instead of open-coded variant Uladzislau Rezki (Sony)
2021-04-16 1:10 ` [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes Paul E. McKenney
5 siblings, 0 replies; 8+ messages in thread
From: Uladzislau Rezki (Sony) @ 2021-04-15 17:19 UTC (permalink / raw)
To: LKML, RCU, Paul E . McKenney
Cc: Michal Hocko, Andrew Morton, Daniel Axtens, Frederic Weisbecker,
Neeraj Upadhyay, Joel Fernandes, Peter Zijlstra, Thomas Gleixner,
Theodore Y . Ts'o, Sebastian Andrzej Siewior,
Uladzislau Rezki, Oleksiy Avramchenko
Before attempting to start a new batch, the "monitor_todo"
variable is set to "false", then set back to "true" if a
previous RCU batch turns out to be still in progress.
Instead, drop it to "false" only once a new batch has been
successfully queued; otherwise it simply stays active. There
is no reason to toggle it forth and back.
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
kernel/rcu/tree.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 3ddc9dc97487..17c128d93825 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3415,15 +3415,14 @@ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
unsigned long flags)
{
// Attempt to start a new batch.
- krcp->monitor_todo = false;
if (queue_kfree_rcu_work(krcp)) {
// Success! Our job is done here.
+ krcp->monitor_todo = false;
raw_spin_unlock_irqrestore(&krcp->lock, flags);
return;
}
// Previous RCU batch still in progress, try again later.
- krcp->monitor_todo = true;
schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
raw_spin_unlock_irqrestore(&krcp->lock, flags);
}
--
2.20.1
* [PATCH v2 5/5] kvfree_rcu: Use kfree_rcu_monitor() instead of open-coded variant
2021-04-15 17:19 [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes Uladzislau Rezki (Sony)
` (3 preceding siblings ...)
2021-04-15 17:19 ` [PATCH v2 4/5] kvfree_rcu: Update "monitor_todo" once a batch is started Uladzislau Rezki (Sony)
@ 2021-04-15 17:20 ` Uladzislau Rezki (Sony)
2021-04-16 1:10 ` [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes Paul E. McKenney
5 siblings, 0 replies; 8+ messages in thread
From: Uladzislau Rezki (Sony) @ 2021-04-15 17:20 UTC (permalink / raw)
To: LKML, RCU, Paul E . McKenney
Cc: Michal Hocko, Andrew Morton, Daniel Axtens, Frederic Weisbecker,
Neeraj Upadhyay, Joel Fernandes, Peter Zijlstra, Thomas Gleixner,
Theodore Y . Ts'o, Sebastian Andrzej Siewior,
Uladzislau Rezki, Oleksiy Avramchenko
To queue a new batch there is already a kfree_rcu_monitor()
function that checks the "monitor_todo" variable and invokes
kfree_rcu_drain_unlock() to start a new batch after a grace
period. Get rid of the open-coded variant in the shrinker by
switching it to that function.
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
kernel/rcu/tree.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 17c128d93825..b3e04c4fefcf 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3670,7 +3670,6 @@ static unsigned long
kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
{
int cpu, freed = 0;
- unsigned long flags;
for_each_possible_cpu(cpu) {
int count;
@@ -3678,12 +3677,7 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
count = krcp->count;
count += drain_page_cache(krcp);
-
- raw_spin_lock_irqsave(&krcp->lock, flags);
- if (krcp->monitor_todo)
- kfree_rcu_drain_unlock(krcp, flags);
- else
- raw_spin_unlock_irqrestore(&krcp->lock, flags);
+ kfree_rcu_monitor(&krcp->monitor_work.work);
sc->nr_to_scan -= count;
freed += count;
--
2.20.1
* Re: [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes
2021-04-15 17:19 [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes Uladzislau Rezki (Sony)
` (4 preceding siblings ...)
2021-04-15 17:20 ` [PATCH v2 5/5] kvfree_rcu: Use kfree_rcu_monitor() instead of open-coded variant Uladzislau Rezki (Sony)
@ 2021-04-16 1:10 ` Paul E. McKenney
2021-04-16 10:14 ` Uladzislau Rezki
5 siblings, 1 reply; 8+ messages in thread
From: Paul E. McKenney @ 2021-04-16 1:10 UTC (permalink / raw)
To: Uladzislau Rezki (Sony)
Cc: LKML, RCU, Michal Hocko, Andrew Morton, Daniel Axtens,
Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
Peter Zijlstra, Thomas Gleixner, Theodore Y . Ts'o,
Sebastian Andrzej Siewior, Oleksiy Avramchenko
On Thu, Apr 15, 2021 at 07:19:55PM +0200, Uladzislau Rezki (Sony) wrote:
> This is a v2 of a small series. See the changelog below:
>
> V1 -> V2:
> - document the rcu_delay_page_cache_fill_msec parameter;
> - drop the "kvfree_rcu: introduce "flags" variable" patch;
> - reword commit messages;
> - in the patch [1], do not use READ_ONCE() instances in
> get_cached_bnode()/put_cached_bnode(), since access there is
> protected by the lock;
> - Capitalize the word following ":" in commit messages.
>
> Uladzislau Rezki (Sony) (4):
> [1] kvfree_rcu: Use [READ/WRITE]_ONCE() macros to access to nr_bkv_objs
> [2] kvfree_rcu: Add a bulk-list check when a scheduler is run
> [3] kvfree_rcu: Update "monitor_todo" once a batch is started
> [4] kvfree_rcu: Use kfree_rcu_monitor() instead of open-coded variant
>
> Zhang Qiang (1):
> [5] kvfree_rcu: Release a page cache under memory pressure
I have queued these, thank you both! And they pass touch tests, but
could you please check that "git am -3" correctly resolved a couple of
conflicts, one in Documentation/admin-guide/kernel-parameters.txt and
the other in kernel/rcu/tree.c?
Thanx, Paul
> .../admin-guide/kernel-parameters.txt | 5 +
> kernel/rcu/tree.c | 92 +++++++++++++++----
> 2 files changed, 77 insertions(+), 20 deletions(-)
>
> --
> 2.20.1
>
* Re: [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes
2021-04-16 1:10 ` [PATCH v2 0/5] kvfree_rcu() miscellaneous fixes Paul E. McKenney
@ 2021-04-16 10:14 ` Uladzislau Rezki
0 siblings, 0 replies; 8+ messages in thread
From: Uladzislau Rezki @ 2021-04-16 10:14 UTC (permalink / raw)
To: Paul E. McKenney
Cc: Uladzislau Rezki (Sony),
LKML, RCU, Michal Hocko, Andrew Morton, Daniel Axtens,
Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
Peter Zijlstra, Thomas Gleixner, Theodore Y . Ts'o,
Sebastian Andrzej Siewior, Oleksiy Avramchenko
On Thu, Apr 15, 2021 at 06:10:26PM -0700, Paul E. McKenney wrote:
> On Thu, Apr 15, 2021 at 07:19:55PM +0200, Uladzislau Rezki (Sony) wrote:
> > This is a v2 of a small series. See the changelog below:
> >
> > V1 -> V2:
> > - document the rcu_delay_page_cache_fill_msec parameter;
> > - drop the "kvfree_rcu: introduce "flags" variable" patch;
> > - reword commit messages;
> > - in the patch [1], do not use READ_ONCE() instances in
> > get_cached_bnode()/put_cached_bnode(), since access there is
> > protected by the lock;
> > - Capitalize the word following ":" in commit messages.
> >
> > Uladzislau Rezki (Sony) (4):
> > [1] kvfree_rcu: Use [READ/WRITE]_ONCE() macros to access to nr_bkv_objs
> > [2] kvfree_rcu: Add a bulk-list check when a scheduler is run
> > [3] kvfree_rcu: Update "monitor_todo" once a batch is started
> > [4] kvfree_rcu: Use kfree_rcu_monitor() instead of open-coded variant
> >
> > Zhang Qiang (1):
> > [5] kvfree_rcu: Release a page cache under memory pressure
>
> I have queued these, thank you both! And they pass touch tests, but
> could you please check that "git am -3" correctly resolved a couple of
> conflicts, one in Documentation/admin-guide/kernel-parameters.txt and
> the other in kernel/rcu/tree.c?
>
Thanks!
I have double checked it. I see that everything is in place and
has been correctly applied on your latest "dev".
--
Vlad Rezki