* [RFC PATCH] mm: swap: remove lru drain waiters
@ 2020-06-01 14:37 Hillf Danton
  2020-06-01 15:41 ` Konstantin Khlebnikov
  2020-06-03  8:21 ` Sebastian Andrzej Siewior
  0 siblings, 2 replies; 5+ messages in thread
From: Hillf Danton @ 2020-06-01 14:37 UTC (permalink / raw)
  To: linux-mm
  Cc: LKML, Sebastian Andrzej Siewior, Konstantin Khlebnikov, Hillf Danton


After bumping the lru drain sequence, newcomers avoid waiting for the
current drainer, who is busy flushing work on each online CPU, by only
trying to lock the mutex; the drainer, on the other hand, covers the
work of those who failed to acquire the lock by re-checking the lru
drain sequence after releasing the lock.

See eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
for reasons why we can skip waiting for the lock.

The memory barriers around the sequence and the lock work together to
remove waiters without their drain work being abandoned.

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Hillf Danton <hdanton@sina.com>
---
This is inspired by one of the works from Sebastian.

--- a/mm/swap.c
+++ b/mm/swap.c
@@ -714,10 +714,11 @@ static void lru_add_drain_per_cpu(struct
  */
 void lru_add_drain_all(void)
 {
-	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
+	static unsigned int lru_drain_seq;
 	static DEFINE_MUTEX(lock);
 	static struct cpumask has_work;
-	int cpu, seq;
+	int cpu;
+	unsigned int seq;
 
 	/*
 	 * Make sure nobody triggers this path before mm_percpu_wq is fully
@@ -726,18 +727,16 @@ void lru_add_drain_all(void)
 	if (WARN_ON(!mm_percpu_wq))
 		return;
 
-	seq = raw_read_seqcount_latch(&seqcount);
+	lru_drain_seq++;
+	smp_mb();
 
-	mutex_lock(&lock);
+more_work:
 
-	/*
-	 * Piggyback on drain started and finished while we waited for lock:
-	 * all pages pended at the time of our enter were drained from vectors.
-	 */
-	if (__read_seqcount_retry(&seqcount, seq))
-		goto done;
+	if (!mutex_trylock(&lock))
+		return;
 
-	raw_write_seqcount_latch(&seqcount);
+	smp_mb();
+	seq = lru_drain_seq;
 
 	cpumask_clear(&has_work);
 
@@ -759,8 +758,11 @@ void lru_add_drain_all(void)
 	for_each_cpu(cpu, &has_work)
 		flush_work(&per_cpu(lru_add_drain_work, cpu));
 
-done:
 	mutex_unlock(&lock);
+
+	smp_mb();
+	if (seq != lru_drain_seq)
+		goto more_work;
 }
 #else
 void lru_add_drain_all(void)
--





Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-01 14:37 [RFC PATCH] mm: swap: remove lru drain waiters Hillf Danton
2020-06-01 15:41 ` Konstantin Khlebnikov
2020-06-03  8:21 ` Sebastian Andrzej Siewior
2020-06-03 10:24   ` Ahmed S. Darwish
2020-06-03 13:39     ` Hillf Danton
