linux-mm.kvack.org archive mirror
* [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems
@ 2019-09-11 15:05 Waiman Long
  2019-09-11 15:05 ` [PATCH 1/5] locking/rwsem: Add down_write_timedlock() Waiman Long
                   ` (6 more replies)
  0 siblings, 7 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-11 15:05 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro, Mike Kravetz
  Cc: linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso, Waiman Long

A customer with large SMP systems (up to 16 sockets) running an application
that uses a large amount of static hugepages (~500-1500GB) is experiencing
random multi-second delays. These delays are caused by the long time it
takes to scan the VMA interval tree with the mmap_sem held.

To fix this problem while preserving existing behavior as much as
possible, we need to allow a timeout in down_write() and disable PMD
sharing when acquiring the lock takes too long. Since a transaction can
involve touching multiple huge pages, timing out on each individual huge
page interaction does not completely solve the problem. So a threshold
is set to disable PMD sharing completely if too many timeouts happen.

The first 4 patches of this 5-patch series add a new
down_write_timedlock() API which accepts a timeout argument and returns
true if locking is successful or false otherwise. It works more or less
like down_write_trylock() except that the calling thread may sleep.
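
For illustration, a caller that is only willing to wait ~10ms before
falling back to some other action could look like the sketch below
(function name, timeout value and fallback policy are made up for this
example; they are not taken from the patches):

	#include <linux/rwsem.h>
	#include <linux/ktime.h>
	#include <linux/errno.h>

	static int update_with_deadline(struct rw_semaphore *sem)
	{
		/* Give up if the write lock cannot be taken within ~10ms */
		if (!down_write_timedlock(sem, ms_to_ktime(10)))
			return -ETIMEDOUT;	/* caller falls back */

		/* ... critical section protected by the write lock ... */

		up_write(sem);
		return 0;
	}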

The last patch implements the timeout mechanism as described above. With
the patched kernel installed, the customer confirmed that the problem
was gone.

Waiman Long (5):
  locking/rwsem: Add down_write_timedlock()
  locking/rwsem: Enable timeout check when spinning on owner
  locking/osq: Allow early break from OSQ
  locking/rwsem: Enable timeout check when staying in the OSQ
  hugetlbfs: Limit wait time when trying to share huge PMD

 include/linux/fs.h                |   7 ++
 include/linux/osq_lock.h          |  13 +--
 include/linux/rwsem.h             |   4 +-
 kernel/locking/lock_events_list.h |   1 +
 kernel/locking/mutex.c            |   2 +-
 kernel/locking/osq_lock.c         |  12 +-
 kernel/locking/rwsem.c            | 183 +++++++++++++++++++++++++-----
 mm/hugetlb.c                      |  24 +++-
 8 files changed, 201 insertions(+), 45 deletions(-)

-- 
2.18.1



^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 1/5] locking/rwsem: Add down_write_timedlock()
  2019-09-11 15:05 [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Waiman Long
@ 2019-09-11 15:05 ` Waiman Long
  2019-09-11 15:05 ` [PATCH 2/5] locking/rwsem: Enable timeout check when spinning on owner Waiman Long
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-11 15:05 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro, Mike Kravetz
  Cc: linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso, Waiman Long

There are cases where a task wants to acquire a rwsem but doesn't want
to wait for an indefinite period of time. Instead, a task may want
an alternative way of dealing with the inability to acquire the lock
after a certain period of time. There are also cases where waiting
indefinitely can potentially lead to deadlock. Doing it by using a
trylock loop is inelegant as it increases cacheline contention and makes
it difficult to control the actual wait time.

To address this dilemma, a new down_write_timedlock() variant is
introduced which takes an additional ktime_t timeout argument (currently
in ns) relative to now. With this new API, a task can wait for a given
period of time and bail out when the lock cannot be acquired within that
period.

In reality, the actual wait time is likely to be longer than the given
timeout. Timeout checking isn't done while doing optimistic spinning, so
a short timeout smaller than the scheduling period may be less accurate.

From the lockdep perspective, down_write_timedlock() is treated similarly
to down_write_trylock().

A similar down_read_timedlock() may be added later on when the need
arises.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/rwsem.h             |  4 +-
 kernel/locking/lock_events_list.h |  1 +
 kernel/locking/rwsem.c            | 85 +++++++++++++++++++++++++++++--
 3 files changed, 85 insertions(+), 5 deletions(-)

diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index 00d6054687dd..b3c7c5afde46 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -15,6 +15,7 @@
 #include <linux/list.h>
 #include <linux/spinlock.h>
 #include <linux/atomic.h>
+#include <linux/ktime.h>
 #include <linux/err.h>
 #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
 #include <linux/osq_lock.h>
@@ -139,9 +140,10 @@ extern void down_write(struct rw_semaphore *sem);
 extern int __must_check down_write_killable(struct rw_semaphore *sem);
 
 /*
- * trylock for writing -- returns 1 if successful, 0 if contention
+ * trylock or timedlock for writing -- returns 1 if successful, 0 if failed
  */
 extern int down_write_trylock(struct rw_semaphore *sem);
+extern int down_write_timedlock(struct rw_semaphore *sem, ktime_t timeout);
 
 /*
  * release a read lock
diff --git a/kernel/locking/lock_events_list.h b/kernel/locking/lock_events_list.h
index 239039d0ce21..c2345e0472b0 100644
--- a/kernel/locking/lock_events_list.h
+++ b/kernel/locking/lock_events_list.h
@@ -69,3 +69,4 @@ LOCK_EVENT(rwsem_rlock_handoff)	/* # of read lock handoffs		*/
 LOCK_EVENT(rwsem_wlock)		/* # of write locks acquired		*/
 LOCK_EVENT(rwsem_wlock_fail)	/* # of failed write lock acquisitions	*/
 LOCK_EVENT(rwsem_wlock_handoff)	/* # of write lock handoffs		*/
+LOCK_EVENT(rwsem_wlock_timeout)	/* # of write lock timeouts		*/
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index eef04551eae7..c0285749c338 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -27,6 +27,7 @@
 #include <linux/export.h>
 #include <linux/rwsem.h>
 #include <linux/atomic.h>
+#include <linux/hrtimer.h>
 
 #include "rwsem.h"
 #include "lock_events.h"
@@ -988,6 +989,26 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
 #define OWNER_NULL	1
 #endif
 
+/*
+ * Set up the hrtimer to fire at a future time relative to now.
+ * Return: The hrtimer_sleeper pointer on success, or NULL if it
+ *	   has timed out.
+ */
+static inline struct hrtimer_sleeper *
+rwsem_setup_hrtimer(struct hrtimer_sleeper *to, ktime_t timeout)
+{
+	ktime_t curtime = ns_to_ktime(sched_clock());
+
+	if (ktime_compare(curtime, timeout) >= 0)
+		return NULL;
+
+	hrtimer_init_sleeper_on_stack(to, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_set_expires_range_ns(&to->timer, timeout - curtime,
+				     current->timer_slack_ns);
+	hrtimer_start_expires(&to->timer, HRTIMER_MODE_REL);
+	return to;
+}
+
 /*
  * Wait for the read lock to be granted
  */
@@ -1136,7 +1157,7 @@ static inline void rwsem_disable_reader_optspin(struct rw_semaphore *sem,
  * Wait until we successfully acquire the write lock
  */
 static struct rw_semaphore *
-rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
+rwsem_down_write_slowpath(struct rw_semaphore *sem, int state, ktime_t timeout)
 {
 	long count;
 	bool disable_rspin;
@@ -1144,6 +1165,13 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	struct rwsem_waiter waiter;
 	struct rw_semaphore *ret = sem;
 	DEFINE_WAKE_Q(wake_q);
+	struct hrtimer_sleeper timer_sleeper, *to = NULL;
+
+	/*
+	 * The timeout value is now the end time when the timer will expire.
+	 */
+	if (timeout)
+		timeout = ktime_add_ns(timeout, sched_clock());
 
 	/* do optimistic spinning and steal lock if possible */
 	if (rwsem_can_spin_on_owner(sem, RWSEM_WR_NONSPINNABLE) &&
@@ -1235,6 +1263,15 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 			if (signal_pending_state(state, current))
 				goto out_nolock;
 
+			if (timeout) {
+				if (!to)
+					to = rwsem_setup_hrtimer(&timer_sleeper,
+								 timeout);
+				if (!to || !to->task) {
+					lockevent_inc(rwsem_wlock_timeout);
+					goto out_nolock;
+				}
+			}
 			schedule();
 			lockevent_inc(rwsem_sleep_writer);
 			set_current_state(state);
@@ -1273,6 +1310,11 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	raw_spin_unlock_irq(&sem->wait_lock);
 	lockevent_inc(rwsem_wlock);
 
+out:
+	if (to) {
+		hrtimer_cancel(&to->timer);
+		destroy_hrtimer_on_stack(&to->timer);
+	}
 	return ret;
 
 out_nolock:
@@ -1291,7 +1333,8 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	wake_up_q(&wake_q);
 	lockevent_inc(rwsem_wlock_fail);
 
-	return ERR_PTR(-EINTR);
+	ret = ERR_PTR(timeout ? -ETIMEDOUT : -EINTR);
+	goto out;
 }
 
 /*
@@ -1389,7 +1432,7 @@ static inline void __down_write(struct rw_semaphore *sem)
 
 	if (unlikely(!atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
 						      RWSEM_WRITER_LOCKED)))
-		rwsem_down_write_slowpath(sem, TASK_UNINTERRUPTIBLE);
+		rwsem_down_write_slowpath(sem, TASK_UNINTERRUPTIBLE, 0);
 	else
 		rwsem_set_owner(sem);
 }
@@ -1400,7 +1443,7 @@ static inline int __down_write_killable(struct rw_semaphore *sem)
 
 	if (unlikely(!atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
 						      RWSEM_WRITER_LOCKED))) {
-		if (IS_ERR(rwsem_down_write_slowpath(sem, TASK_KILLABLE)))
+		if (IS_ERR(rwsem_down_write_slowpath(sem, TASK_KILLABLE, 0)))
 			return -EINTR;
 	} else {
 		rwsem_set_owner(sem);
@@ -1408,6 +1451,25 @@ static inline int __down_write_killable(struct rw_semaphore *sem)
 	return 0;
 }
 
+static inline int __down_write_timedlock(struct rw_semaphore *sem,
+					 ktime_t timeout)
+{
+	long tmp = RWSEM_UNLOCKED_VALUE;
+
+	if (unlikely(!atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
+						      RWSEM_WRITER_LOCKED))) {
+		if (unlikely(timeout <= 0))
+			return false;
+
+		if (IS_ERR(rwsem_down_write_slowpath(sem, TASK_UNINTERRUPTIBLE,
+						     timeout)))
+			return false;
+	} else {
+		rwsem_set_owner(sem);
+	}
+	return true;
+}
+
 static inline int __down_write_trylock(struct rw_semaphore *sem)
 {
 	long tmp;
@@ -1568,6 +1630,21 @@ int down_write_trylock(struct rw_semaphore *sem)
 }
 EXPORT_SYMBOL(down_write_trylock);
 
+/*
+ * lock for writing with timeout (relative to now in ns)
+ */
+int down_write_timedlock(struct rw_semaphore *sem, ktime_t timeout)
+{
+	might_sleep();
+	if (__down_write_timedlock(sem, timeout)) {
+		rwsem_acquire(&sem->dep_map, 0, 1, _RET_IP_);
+		return true;
+	}
+
+	return false;
+}
+EXPORT_SYMBOL(down_write_timedlock);
+
 /*
  * release a read lock
  */
-- 
2.18.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 2/5] locking/rwsem: Enable timeout check when spinning on owner
  2019-09-11 15:05 [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Waiman Long
  2019-09-11 15:05 ` [PATCH 1/5] locking/rwsem: Add down_write_timedlock() Waiman Long
@ 2019-09-11 15:05 ` Waiman Long
  2019-09-11 15:05 ` [PATCH 3/5] locking/osq: Allow early break from OSQ Waiman Long
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-11 15:05 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro, Mike Kravetz
  Cc: linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso, Waiman Long

When a task is optimistically spinning on the owner, it may do so for a
long time if there is no other runnable task in the run queue. That can
be long past the given timeout value.

To prevent that from happening, rwsem_optimistic_spin() is now modified
to check the timeout value, if specified, and abort early once it has
expired.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/rwsem.c | 67 ++++++++++++++++++++++++++++--------------
 1 file changed, 45 insertions(+), 22 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index c0285749c338..49f052d68404 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -716,11 +716,13 @@ rwsem_owner_state(struct task_struct *owner, unsigned long flags, unsigned long
 }
 
 static noinline enum owner_state
-rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
+rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable,
+		    ktime_t timeout)
 {
 	struct task_struct *new, *owner;
 	unsigned long flags, new_flags;
 	enum owner_state state;
+	int loopcnt = 0;
 
 	owner = rwsem_owner_flags(sem, &flags);
 	state = rwsem_owner_state(owner, flags, nonspinnable);
@@ -749,16 +751,22 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
 		 */
 		barrier();
 
-		if (need_resched() || !owner_on_cpu(owner)) {
-			state = OWNER_NONSPINNABLE;
-			break;
-		}
+		if (need_resched() || !owner_on_cpu(owner))
+			goto stop_optspin;
+
+		if (timeout && !(++loopcnt & 0xf) &&
+		   (sched_clock() >= ktime_to_ns(timeout)))
+			goto stop_optspin;
 
 		cpu_relax();
 	}
 	rcu_read_unlock();
 
 	return state;
+
+stop_optspin:
+	rcu_read_unlock();
+	return OWNER_NONSPINNABLE;
 }
 
 /*
@@ -786,12 +794,13 @@ static inline u64 rwsem_rspin_threshold(struct rw_semaphore *sem)
 	return sched_clock() + delta;
 }
 
-static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
+static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock,
+				  ktime_t timeout)
 {
 	bool taken = false;
 	int prev_owner_state = OWNER_NULL;
 	int loop = 0;
-	u64 rspin_threshold = 0;
+	u64 rspin_threshold = 0, curtime;
 	unsigned long nonspinnable = wlock ? RWSEM_WR_NONSPINNABLE
 					   : RWSEM_RD_NONSPINNABLE;
 
@@ -801,6 +810,8 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
 	if (!osq_lock(&sem->osq))
 		goto done;
 
+	curtime = timeout ? sched_clock() : 0;
+
 	/*
 	 * Optimistically spin on the owner field and attempt to acquire the
 	 * lock whenever the owner changes. Spinning will be stopped when:
@@ -810,7 +821,7 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
 	for (;;) {
 		enum owner_state owner_state;
 
-		owner_state = rwsem_spin_on_owner(sem, nonspinnable);
+		owner_state = rwsem_spin_on_owner(sem, nonspinnable, timeout);
 		if (!(owner_state & OWNER_SPINNABLE))
 			break;
 
@@ -823,6 +834,21 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
 		if (taken)
 			break;
 
+		/*
+		 * Check current time once every 16 iterations when
+		 *  1) spinning on reader-owned rwsem; or
+		 *  2) a timeout value is specified.
+		 *
+		 * This is to avoid calling sched_clock() too frequently
+		 * so as to reduce the average latency between the times
+		 * when the lock becomes free and when the spinner is
+		 * ready to do a trylock.
+		 */
+		if ((wlock && (owner_state == OWNER_READER)) || timeout) {
+			if (!(++loop & 0xf))
+				curtime = sched_clock();
+		}
+
 		/*
 		 * Time-based reader-owned rwsem optimistic spinning
 		 */
@@ -838,23 +864,18 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
 				if (rwsem_test_oflags(sem, nonspinnable))
 					break;
 				rspin_threshold = rwsem_rspin_threshold(sem);
-				loop = 0;
 			}
 
-			/*
-			 * Check time threshold once every 16 iterations to
-			 * avoid calling sched_clock() too frequently so
-			 * as to reduce the average latency between the times
-			 * when the lock becomes free and when the spinner
-			 * is ready to do a trylock.
-			 */
-			else if (!(++loop & 0xf) && (sched_clock() > rspin_threshold)) {
+			else if (curtime > rspin_threshold) {
 				rwsem_set_nonspinnable(sem);
 				lockevent_inc(rwsem_opt_nospin);
 				break;
 			}
 		}
 
+		if (timeout && (ns_to_ktime(curtime) >= timeout))
+			break;
+
 		/*
 		 * An RT task cannot do optimistic spinning if it cannot
 		 * be sure the lock holder is running or live-lock may
@@ -968,7 +989,8 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
 	return false;
 }
 
-static inline bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
+static inline bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock,
+					 ktime_t timeout)
 {
 	return false;
 }
@@ -982,7 +1004,8 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
 }
 
 static inline int
-rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
+rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable,
+		    ktime_t timeout)
 {
 	return 0;
 }
@@ -1036,7 +1059,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state)
 	 */
 	atomic_long_add(-RWSEM_READER_BIAS, &sem->count);
 	adjustment = 0;
-	if (rwsem_optimistic_spin(sem, false)) {
+	if (rwsem_optimistic_spin(sem, false, 0)) {
 		/* rwsem_optimistic_spin() implies ACQUIRE on success */
 		/*
 		 * Wake up other readers in the wait list if the front
@@ -1175,7 +1198,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state, ktime_t timeout)
 
 	/* do optimistic spinning and steal lock if possible */
 	if (rwsem_can_spin_on_owner(sem, RWSEM_WR_NONSPINNABLE) &&
-	    rwsem_optimistic_spin(sem, true)) {
+	    rwsem_optimistic_spin(sem, true, timeout)) {
 		/* rwsem_optimistic_spin() implies ACQUIRE on success */
 		return sem;
 	}
@@ -1255,7 +1278,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state, ktime_t timeout)
 		 * without sleeping.
 		 */
 		if ((wstate == WRITER_HANDOFF) &&
-		    (rwsem_spin_on_owner(sem, 0) == OWNER_NULL))
+		    (rwsem_spin_on_owner(sem, 0, 0) == OWNER_NULL))
 			goto trylock_again;
 
 		/* Block until there are no active lockers. */
-- 
2.18.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 3/5] locking/osq: Allow early break from OSQ
  2019-09-11 15:05 [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Waiman Long
  2019-09-11 15:05 ` [PATCH 1/5] locking/rwsem: Add down_write_timedlock() Waiman Long
  2019-09-11 15:05 ` [PATCH 2/5] locking/rwsem: Enable timeout check when spinning on owner Waiman Long
@ 2019-09-11 15:05 ` Waiman Long
  2019-09-11 15:05 ` [PATCH 4/5] locking/rwsem: Enable timeout check when staying in the OSQ Waiman Long
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-11 15:05 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro, Mike Kravetz
  Cc: linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso, Waiman Long

The current osq_lock() function will spin until it gets the lock or
until its time slice has been used up. There may be other reasons why
a task may want to back out of the OSQ before getting the lock. This
patch extends the osq_lock() function by adding two new arguments - a
break function pointer and its argument. That break function will be
called, if defined, in each iteration of the loop to see if it should
break out early.
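
For illustration, a deadline-based break function could look like the
sketch below (hypothetical names; the actual rwsem user of this hook is
added in a later patch of this series):

	#include <linux/types.h>
	#include <linux/sched/clock.h>
	#include <linux/osq_lock.h>

	struct deadline_break_arg {
		u64 deadline;	/* sched_clock() value (ns) to give up at */
	};

	static bool deadline_osq_break(void *brk_arg)
	{
		struct deadline_break_arg *arg = brk_arg;

		return sched_clock() >= arg->deadline;
	}

	/* Bail out of the OSQ if we have queued for more than @ns ns */
	static bool osq_lock_with_deadline(struct optimistic_spin_queue *osq,
					   u64 ns)
	{
		struct deadline_break_arg arg = {
			.deadline = sched_clock() + ns,
		};

		return osq_lock(osq, deadline_osq_break, &arg);
	}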

The optimistic_spin_node structure in osq_lock.h isn't needed by callers,
so it is moved into osq_lock.c.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/osq_lock.h  | 13 ++-----------
 kernel/locking/mutex.c    |  2 +-
 kernel/locking/osq_lock.c | 12 +++++++++++-
 kernel/locking/rwsem.c    |  2 +-
 4 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/include/linux/osq_lock.h b/include/linux/osq_lock.h
index 5581dbd3bd34..161eb6b26d6d 100644
--- a/include/linux/osq_lock.h
+++ b/include/linux/osq_lock.h
@@ -2,16 +2,6 @@
 #ifndef __LINUX_OSQ_LOCK_H
 #define __LINUX_OSQ_LOCK_H
 
-/*
- * An MCS like lock especially tailored for optimistic spinning for sleeping
- * lock implementations (mutex, rwsem, etc).
- */
-struct optimistic_spin_node {
-	struct optimistic_spin_node *next, *prev;
-	int locked; /* 1 if lock acquired */
-	int cpu; /* encoded CPU # + 1 value */
-};
-
 struct optimistic_spin_queue {
 	/*
 	 * Stores an encoded value of the CPU # of the tail node in the queue.
@@ -30,7 +20,8 @@ static inline void osq_lock_init(struct optimistic_spin_queue *lock)
 	atomic_set(&lock->tail, OSQ_UNLOCKED_VAL);
 }
 
-extern bool osq_lock(struct optimistic_spin_queue *lock);
+extern bool osq_lock(struct optimistic_spin_queue *lock,
+		     bool (*break_fn)(void *), void *break_arg);
 extern void osq_unlock(struct optimistic_spin_queue *lock);
 
 static inline bool osq_is_locked(struct optimistic_spin_queue *lock)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 468a9b8422e3..8a1df82fd71a 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -654,7 +654,7 @@ mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
 		 * acquire the mutex all at once, the spinners need to take a
 		 * MCS (queued) lock first before spinning on the owner field.
 		 */
-		if (!osq_lock(&lock->osq))
+		if (!osq_lock(&lock->osq, NULL, NULL))
 			goto fail;
 	}
 
diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 6ef600aa0f47..40c94380a485 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -11,6 +11,12 @@
  * called from interrupt context and we have preemption disabled while
  * spinning.
  */
+struct optimistic_spin_node {
+	struct optimistic_spin_node *next, *prev;
+	int locked; /* 1 if lock acquired */
+	int cpu; /* encoded CPU # + 1 value */
+};
+
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_node, osq_node);
 
 /*
@@ -87,7 +93,8 @@ osq_wait_next(struct optimistic_spin_queue *lock,
 	return next;
 }
 
-bool osq_lock(struct optimistic_spin_queue *lock)
+bool osq_lock(struct optimistic_spin_queue *lock,
+	      bool (*break_fn)(void *), void *break_arg)
 {
 	struct optimistic_spin_node *node = this_cpu_ptr(&osq_node);
 	struct optimistic_spin_node *prev, *next;
@@ -143,6 +150,9 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 		if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
 			goto unqueue;
 
+		if (unlikely(break_fn) && break_fn(break_arg))
+			goto unqueue;
+
 		cpu_relax();
 	}
 	return true;
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 49f052d68404..c15926ecb21e 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -807,7 +807,7 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock,
 	preempt_disable();
 
 	/* sem->wait_lock should not be held when doing optimistic spinning */
-	if (!osq_lock(&sem->osq))
+	if (!osq_lock(&sem->osq, NULL, NULL))
 		goto done;
 
 	curtime = timeout ? sched_clock() : 0;
-- 
2.18.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 4/5] locking/rwsem: Enable timeout check when staying in the OSQ
  2019-09-11 15:05 [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Waiman Long
                   ` (2 preceding siblings ...)
  2019-09-11 15:05 ` [PATCH 3/5] locking/osq: Allow early break from OSQ Waiman Long
@ 2019-09-11 15:05 ` Waiman Long
  2019-09-11 15:05 ` [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD Waiman Long
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-11 15:05 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro, Mike Kravetz
  Cc: linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso, Waiman Long

Use the break function allowed by the new osq_lock() to enable early
break from the OSQ when a timeout value is specified and the expiration
time has been reached.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/rwsem.c | 35 +++++++++++++++++++++++++++++++----
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index c15926ecb21e..78708097162a 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -794,23 +794,50 @@ static inline u64 rwsem_rspin_threshold(struct rw_semaphore *sem)
 	return sched_clock() + delta;
 }
 
+struct rwsem_break_arg {
+	u64 timeout;
+	int loopcnt;
+};
+
+static bool rwsem_osq_break(void *brk_arg)
+{
+	struct rwsem_break_arg *arg = brk_arg;
+
+	arg->loopcnt++;
+	/*
+	 * Check sched_clock() only once every 256 iterations.
+	 */
+	if (!(arg->loopcnt & 0xff) && (sched_clock() >= arg->timeout))
+		return true;
+	return false;
+}
+
 static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock,
 				  ktime_t timeout)
 {
-	bool taken = false;
+	bool taken = false, locked;
 	int prev_owner_state = OWNER_NULL;
 	int loop = 0;
 	u64 rspin_threshold = 0, curtime;
+	struct rwsem_break_arg break_arg;
 	unsigned long nonspinnable = wlock ? RWSEM_WR_NONSPINNABLE
 					   : RWSEM_RD_NONSPINNABLE;
 
 	preempt_disable();
 
 	/* sem->wait_lock should not be held when doing optimistic spinning */
-	if (!osq_lock(&sem->osq, NULL, NULL))
-		goto done;
+	if (timeout) {
+		break_arg.timeout = ktime_to_ns(timeout);
+		break_arg.loopcnt = 0;
+		locked = osq_lock(&sem->osq, rwsem_osq_break, &break_arg);
+		curtime = sched_clock();
+	} else {
+		locked = osq_lock(&sem->osq, NULL, NULL);
+		curtime = 0;
+	}
 
-	curtime = timeout ? sched_clock() : 0;
+	if (!locked)
+		goto done;
 
 	/*
 	 * Optimistically spin on the owner field and attempt to acquire the
-- 
2.18.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 15:05 [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Waiman Long
                   ` (3 preceding siblings ...)
  2019-09-11 15:05 ` [PATCH 4/5] locking/rwsem: Enable timeout check when staying in the OSQ Waiman Long
@ 2019-09-11 15:05 ` Waiman Long
  2019-09-11 15:14   ` Matthew Wilcox
                     ` (3 more replies)
  2019-09-12  5:36 ` Hillf Danton
  2019-09-13  1:50 ` [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Dave Chinner
  6 siblings, 4 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-11 15:05 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro, Mike Kravetz
  Cc: linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso, Waiman Long

When allocating a large amount of static hugepages (~500-1500GB) on a
system with a large number of CPUs (4, 8 or even 16 sockets), performance
degradation (random multi-second delays) was observed when thousands
of processes are trying to fault in the data into the huge pages. The
likelihood of the delay increases with the number of sockets and hence
the CPUs a system has.  This only happens in the initial setup phase
and will be gone after all the necessary data are faulted in.

These random delays, however, are deemed unacceptable. The cause of
that delay is the long wait time in acquiring the mmap_sem when trying
to share the huge PMDs.

To remove the unacceptable delays, we have to limit the amount of wait
time on the mmap_sem. So the new down_write_timedlock() function is
used to acquire the write lock on the mmap_sem with a timeout value of
10ms which should not cause a perceivable delay. If timeout happens,
the task will abandon its effort to share the PMD and allocate its own
copy instead.

When too many timeouts happen (threshold currently set at 256), the
system may be too large for PMD sharing to be useful without undue delay.
So the sharing will be disabled in this case.
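
Condensed, the new sharing path behaves roughly like the sketch below
(an illustrative summary of the diff that follows, slightly simplified):

	if (!vma_shareable(vma, addr) ||
	    atomic_read(&timeout_cnt) >= PMD_SHARE_DISABLE_THRESHOLD)
		return (pte_t *)pmd_alloc(mm, pud, addr);	/* don't share */

	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
		atomic_inc(&timeout_cnt);	/* lock not taken within 10ms */
		return (pte_t *)pmd_alloc(mm, pud, addr);
	}

	/* i_mmap_rwsem held: search for a shareable PMD as before */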

Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/fs.h |  7 +++++++
 mm/hugetlb.c       | 24 +++++++++++++++++++++---
 2 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 997a530ff4e9..e9d3ad465a6b 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -40,6 +40,7 @@
 #include <linux/fs_types.h>
 #include <linux/build_bug.h>
 #include <linux/stddef.h>
+#include <linux/ktime.h>
 
 #include <asm/byteorder.h>
 #include <uapi/linux/fs.h>
@@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
 	down_write(&mapping->i_mmap_rwsem);
 }
 
+static inline bool i_mmap_timedlock_write(struct address_space *mapping,
+					 ktime_t timeout)
+{
+	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
+}
+
 static inline void i_mmap_unlock_write(struct address_space *mapping)
 {
 	up_write(&mapping->i_mmap_rwsem);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6d7296dd11b8..445af661ae29 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
 	}
 }
 
+#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
+
 /*
  * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
  * and returns the corresponding pte. While this is not necessary for the
@@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 	pte_t *spte = NULL;
 	pte_t *pte;
 	spinlock_t *ptl;
+	static atomic_t timeout_cnt;
 
-	if (!vma_shareable(vma, addr))
-		return (pte_t *)pmd_alloc(mm, pud, addr);
+	/*
+	 * Don't share if it is not sharable or locking attempt timed out
+	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
+	 * disabled as it is just too slow.
+	 */
+	if (!vma_shareable(vma, addr) ||
+	   (atomic_read(&timeout_cnt) >= PMD_SHARE_DISABLE_THRESHOLD))
+		goto out_no_share;
+
+	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
+		if (atomic_inc_return(&timeout_cnt) ==
+		    PMD_SHARE_DISABLE_THRESHOLD)
+			pr_info("Hugetlbfs PMD sharing disabled because of timeouts!\n");
+		goto out_no_share;
+	}
 
-	i_mmap_lock_write(mapping);
 	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
 		if (svma == vma)
 			continue;
@@ -4806,6 +4821,9 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
 	i_mmap_unlock_write(mapping);
 	return pte;
+
+out_no_share:
+	return (pte_t *)pmd_alloc(mm, pud, addr);
 }
 
 /*
-- 
2.18.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 15:05 ` [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD Waiman Long
@ 2019-09-11 15:14   ` Matthew Wilcox
  2019-09-11 15:44     ` Waiman Long
  2019-09-11 16:01   ` Qian Cai
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox @ 2019-09-11 15:14 UTC (permalink / raw)
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso

On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
> When allocating a large amount of static hugepages (~500-1500GB) on a
> system with large number of CPUs (4, 8 or even 16 sockets), performance
> degradation (random multi-second delays) was observed when thousands
> of processes are trying to fault in the data into the huge pages. The
> likelihood of the delay increases with the number of sockets and hence
> the CPUs a system has.  This only happens in the initial setup phase
> and will be gone after all the necessary data are faulted in.

Can't the application just specify MAP_POPULATE?


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 15:14   ` Matthew Wilcox
@ 2019-09-11 15:44     ` Waiman Long
  2019-09-11 17:03       ` Mike Kravetz
  0 siblings, 1 reply; 29+ messages in thread
From: Waiman Long @ 2019-09-11 15:44 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso

On 9/11/19 4:14 PM, Matthew Wilcox wrote:
> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>> When allocating a large amount of static hugepages (~500-1500GB) on a
>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>> degradation (random multi-second delays) was observed when thousands
>> of processes are trying to fault in the data into the huge pages. The
>> likelihood of the delay increases with the number of sockets and hence
>> the CPUs a system has.  This only happens in the initial setup phase
>> and will be gone after all the necessary data are faulted in.
> Can;t the application just specify MAP_POPULATE?

Originally, I thought that this happened in the startup phase when the
pages were faulted in. The problem persists after steady state has been
reached though. Every time a new user process is created, it will
have its own page table. It is the sharing of the huge page shared
memory that is causing the problem. Of course, it depends on how the
application is written.

Anyway, MAP_POPULATE will not be useful in this case.

Thanks,
Longman



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 15:05 ` [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD Waiman Long
  2019-09-11 15:14   ` Matthew Wilcox
@ 2019-09-11 16:01   ` Qian Cai
  2019-09-11 16:34     ` Waiman Long
  2019-09-11 19:57   ` Matthew Wilcox
  2019-09-12  3:26   ` Mike Kravetz
  3 siblings, 1 reply; 29+ messages in thread
From: Qian Cai @ 2019-09-11 16:01 UTC (permalink / raw)
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso



> On Sep 11, 2019, at 11:05 AM, Waiman Long <longman@redhat.com> wrote:
> 
> When allocating a large amount of static hugepages (~500-1500GB) on a
> system with large number of CPUs (4, 8 or even 16 sockets), performance
> degradation (random multi-second delays) was observed when thousands
> of processes are trying to fault in the data into the huge pages. The
> likelihood of the delay increases with the number of sockets and hence
> the CPUs a system has.  This only happens in the initial setup phase
> and will be gone after all the necessary data are faulted in.
> 
> These random delays, however, are deemed unacceptable. The cause of
> that delay is the long wait time in acquiring the mmap_sem when trying
> to share the huge PMDs.
> 
> To remove the unacceptable delays, we have to limit the amount of wait
> time on the mmap_sem. So the new down_write_timedlock() function is
> used to acquire the write lock on the mmap_sem with a timeout value of
> 10ms which should not cause a perceivable delay. If timeout happens,
> the task will abandon its effort to share the PMD and allocate its own
> copy instead.
> 
> When too many timeouts happens (threshold currently set at 256), the
> system may be too large for PMD sharing to be useful without undue delay.
> So the sharing will be disabled in this case.
> 
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
> include/linux/fs.h |  7 +++++++
> mm/hugetlb.c       | 24 +++++++++++++++++++++---
> 2 files changed, 28 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 997a530ff4e9..e9d3ad465a6b 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -40,6 +40,7 @@
> #include <linux/fs_types.h>
> #include <linux/build_bug.h>
> #include <linux/stddef.h>
> +#include <linux/ktime.h>
> 
> #include <asm/byteorder.h>
> #include <uapi/linux/fs.h>
> @@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
> 	down_write(&mapping->i_mmap_rwsem);
> }
> 
> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
> +					 ktime_t timeout)
> +{
> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
> +}
> +
> static inline void i_mmap_unlock_write(struct address_space *mapping)
> {
> 	up_write(&mapping->i_mmap_rwsem);
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6d7296dd11b8..445af661ae29 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
> 	}
> }
> 
> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
> +
> /*
>  * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>  * and returns the corresponding pte. While this is not necessary for the
> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
> 	pte_t *spte = NULL;
> 	pte_t *pte;
> 	spinlock_t *ptl;
> +	static atomic_t timeout_cnt;
> 
> -	if (!vma_shareable(vma, addr))
> -		return (pte_t *)pmd_alloc(mm, pud, addr);
> +	/*
> +	 * Don't share if it is not sharable or locking attempt timed out
> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
> +	 * disabled as it is just too slow.

It looks like this kind of policy, interacting with kernel debug options like KASAN (which is going to slow the system down
anyway), could introduce tricky issues due to different timings on a debug kernel.

> +	 */
> +	if (!vma_shareable(vma, addr) ||
> +	   (atomic_read(&timeout_cnt) >= PMD_SHARE_DISABLE_THRESHOLD))
> +		goto out_no_share;
> +
> +	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
> +		if (atomic_inc_return(&timeout_cnt) ==
> +		    PMD_SHARE_DISABLE_THRESHOLD)
> +			pr_info("Hugetlbfs PMD sharing disabled because of timeouts!\n");
> +		goto out_no_share;
> +	}
> 
> -	i_mmap_lock_write(mapping);
> 	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
> 		if (svma == vma)
> 			continue;
> @@ -4806,6 +4821,9 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
> 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
> 	i_mmap_unlock_write(mapping);
> 	return pte;
> +
> +out_no_share:
> +	return (pte_t *)pmd_alloc(mm, pud, addr);
> }
> 
> /*
> -- 
> 2.18.1
> 
> 



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 16:01   ` Qian Cai
@ 2019-09-11 16:34     ` Waiman Long
  2019-09-11 19:42       ` Qian Cai
  0 siblings, 1 reply; 29+ messages in thread
From: Waiman Long @ 2019-09-11 16:34 UTC (permalink / raw)
  To: Qian Cai
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso

On 9/11/19 5:01 PM, Qian Cai wrote:
>
>> On Sep 11, 2019, at 11:05 AM, Waiman Long <longman@redhat.com> wrote:
>>
>> When allocating a large amount of static hugepages (~500-1500GB) on a
>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>> degradation (random multi-second delays) was observed when thousands
>> of processes are trying to fault in the data into the huge pages. The
>> likelihood of the delay increases with the number of sockets and hence
>> the CPUs a system has.  This only happens in the initial setup phase
>> and will be gone after all the necessary data are faulted in.
>>
>> These random delays, however, are deemed unacceptable. The cause of
>> that delay is the long wait time in acquiring the mmap_sem when trying
>> to share the huge PMDs.
>>
>> To remove the unacceptable delays, we have to limit the amount of wait
>> time on the mmap_sem. So the new down_write_timedlock() function is
>> used to acquire the write lock on the mmap_sem with a timeout value of
>> 10ms which should not cause a perceivable delay. If timeout happens,
>> the task will abandon its effort to share the PMD and allocate its own
>> copy instead.
>>
>> When too many timeouts happens (threshold currently set at 256), the
>> system may be too large for PMD sharing to be useful without undue delay.
>> So the sharing will be disabled in this case.
>>
>> Signed-off-by: Waiman Long <longman@redhat.com>
>> ---
>> include/linux/fs.h |  7 +++++++
>> mm/hugetlb.c       | 24 +++++++++++++++++++++---
>> 2 files changed, 28 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>> index 997a530ff4e9..e9d3ad465a6b 100644
>> --- a/include/linux/fs.h
>> +++ b/include/linux/fs.h
>> @@ -40,6 +40,7 @@
>> #include <linux/fs_types.h>
>> #include <linux/build_bug.h>
>> #include <linux/stddef.h>
>> +#include <linux/ktime.h>
>>
>> #include <asm/byteorder.h>
>> #include <uapi/linux/fs.h>
>> @@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
>> 	down_write(&mapping->i_mmap_rwsem);
>> }
>>
>> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
>> +					 ktime_t timeout)
>> +{
>> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
>> +}
>> +
>> static inline void i_mmap_unlock_write(struct address_space *mapping)
>> {
>> 	up_write(&mapping->i_mmap_rwsem);
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 6d7296dd11b8..445af661ae29 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>> 	}
>> }
>>
>> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
>> +
>> /*
>>  * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>>  * and returns the corresponding pte. While this is not necessary for the
>> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>> 	pte_t *spte = NULL;
>> 	pte_t *pte;
>> 	spinlock_t *ptl;
>> +	static atomic_t timeout_cnt;
>>
>> -	if (!vma_shareable(vma, addr))
>> -		return (pte_t *)pmd_alloc(mm, pud, addr);
>> +	/*
>> +	 * Don't share if it is not sharable or locking attempt timed out
>> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
>> +	 * disabled as it is just too slow.
> It looks like this kind of policy interacts with kernel debug options like KASAN (which is going to slow the system down
> anyway) could introduce tricky issues due to different timings on a debug kernel.

With respect to lockdep, down_write_timedlock() works like a trylock. So
a lot of checking will be skipped. Also the lockdep code won't be run
until the lock is acquired. So its execution time has no effect on the
timeout.

Cheers,
Longman



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 15:44     ` Waiman Long
@ 2019-09-11 17:03       ` Mike Kravetz
  2019-09-11 17:15         ` Waiman Long
  0 siblings, 1 reply; 29+ messages in thread
From: Mike Kravetz @ 2019-09-11 17:03 UTC (permalink / raw)
  To: Waiman Long, Matthew Wilcox
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso

On 9/11/19 8:44 AM, Waiman Long wrote:
> On 9/11/19 4:14 PM, Matthew Wilcox wrote:
>> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>> degradation (random multi-second delays) was observed when thousands
>>> of processes are trying to fault in the data into the huge pages. The
>>> likelihood of the delay increases with the number of sockets and hence
>>> the CPUs a system has.  This only happens in the initial setup phase
>>> and will be gone after all the necessary data are faulted in.
>> Can;t the application just specify MAP_POPULATE?
> 
> Originally, I thought that this happened in the startup phase when the
> pages were faulted in. The problem persists after steady state had been
> reached though. Every time you have a new user process created, it will
> have its own page table.

This is still at fault time.  Although, for the particular application it
may be after the 'startup phase'.

>                          It is the sharing of the of huge page shared
> memory that is causing problem. Of course, it depends on how the
> application is written.

It may be the case that some applications would find the delays acceptable
for the benefit of shared pmds once they reach steady state.  As you say, of
course this depends on how the application is written.

I know that Oracle DB would not like it if PMD sharing is disabled for them.
Based on what I know of their model, all processes which share PMDs perform
faults (write or read) during the startup phase.  This is in environments as
big or bigger than you describe above.  I have never looked at/for delays in
these environments around pmd sharing (page faults), but that does not mean
they do not exist.  I will try to get the DB group to give me access to one
of their large environments for analysis.

We may want to consider making the timeout value and disable threshold user
configurable.
-- 
Mike Kravetz


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 17:03       ` Mike Kravetz
@ 2019-09-11 17:15         ` Waiman Long
  2019-09-11 17:22           ` Qian Cai
  2019-09-11 17:28           ` Waiman Long
  0 siblings, 2 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-11 17:15 UTC (permalink / raw)
  To: Mike Kravetz, Matthew Wilcox
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso

On 9/11/19 6:03 PM, Mike Kravetz wrote:
> On 9/11/19 8:44 AM, Waiman Long wrote:
>> On 9/11/19 4:14 PM, Matthew Wilcox wrote:
>>> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>>> degradation (random multi-second delays) was observed when thousands
>>>> of processes are trying to fault in the data into the huge pages. The
>>>> likelihood of the delay increases with the number of sockets and hence
>>>> the CPUs a system has.  This only happens in the initial setup phase
>>>> and will be gone after all the necessary data are faulted in.
>>> Can;t the application just specify MAP_POPULATE?
>> Originally, I thought that this happened in the startup phase when the
>> pages were faulted in. The problem persists after steady state had been
>> reached though. Every time you have a new user process created, it will
>> have its own page table.
> This is still at fault time.  Although, for the particular application it
> may be after the 'startup phase'.
>
>>                          It is the sharing of the of huge page shared
>> memory that is causing problem. Of course, it depends on how the
>> application is written.
> It may be the case that some applications would find the delays acceptable
> for the benefit of shared pmds once they reach steady state.  As you say, of
> course this depends on how the application is written.
>
> I know that Oracle DB would not like it if PMD sharing is disabled for them.
> Based on what I know of their model, all processes which share PMDs perform
> faults (write or read) during the startup phase.  This is in environments as
> big or bigger than you describe above.  I have never looked at/for delays in
> these environments around pmd sharing (page faults), but that does not mean
> they do not exist.  I will try to get the DB group to give me access to one
> of their large environments for analysis.
>
> We may want to consider making the timeout value and disable threshold user
> configurable.

Making it configurable is certainly doable. They can be sysctl
parameters so that the users can reenable PMD sharing by making those
parameters larger.
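
For example, such knobs might look like the following rough sketch (names,
defaults and placement are just for illustration, not part of this series):

	#include <linux/sysctl.h>

	static int sysctl_hugetlb_pmd_share_timeout_ms = 10;
	static int sysctl_hugetlb_pmd_share_disable_threshold = 256;

	static struct ctl_table hugetlb_pmd_share_table[] = {
		{
			.procname	= "hugetlb_pmd_share_timeout_ms",
			.data		= &sysctl_hugetlb_pmd_share_timeout_ms,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec,
		},
		{
			.procname	= "hugetlb_pmd_share_disable_threshold",
			.data		= &sysctl_hugetlb_pmd_share_disable_threshold,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec,
		},
		{ }
	};

	/* registered somewhere during init, e.g. */
	/* register_sysctl("vm", hugetlb_pmd_share_table); */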

Cheers,
Longman



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 17:15         ` Waiman Long
@ 2019-09-11 17:22           ` Qian Cai
  2019-09-11 17:28           ` Waiman Long
  1 sibling, 0 replies; 29+ messages in thread
From: Qian Cai @ 2019-09-11 17:22 UTC (permalink / raw)
  To: Waiman Long
  Cc: Mike Kravetz, Matthew Wilcox, Peter Zijlstra, Ingo Molnar,
	Will Deacon, Alexander Viro, linux-kernel, linux-fsdevel,
	linux-mm, Davidlohr Bueso



> On Sep 11, 2019, at 1:15 PM, Waiman Long <longman@redhat.com> wrote:
> 
> On 9/11/19 6:03 PM, Mike Kravetz wrote:
>> On 9/11/19 8:44 AM, Waiman Long wrote:
>>> On 9/11/19 4:14 PM, Matthew Wilcox wrote:
>>>> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>>>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>>>> degradation (random multi-second delays) was observed when thousands
>>>>> of processes are trying to fault in the data into the huge pages. The
>>>>> likelihood of the delay increases with the number of sockets and hence
>>>>> the CPUs a system has.  This only happens in the initial setup phase
>>>>> and will be gone after all the necessary data are faulted in.
>>>> Can;t the application just specify MAP_POPULATE?
>>> Originally, I thought that this happened in the startup phase when the
>>> pages were faulted in. The problem persists after steady state had been
>>> reached though. Every time you have a new user process created, it will
>>> have its own page table.
>> This is still at fault time.  Although, for the particular application it
>> may be after the 'startup phase'.
>> 
>>>                         It is the sharing of the of huge page shared
>>> memory that is causing problem. Of course, it depends on how the
>>> application is written.
>> It may be the case that some applications would find the delays acceptable
>> for the benefit of shared pmds once they reach steady state.  As you say, of
>> course this depends on how the application is written.
>> 
>> I know that Oracle DB would not like it if PMD sharing is disabled for them.
>> Based on what I know of their model, all processes which share PMDs perform
>> faults (write or read) during the startup phase.  This is in environments as
>> big or bigger than you describe above.  I have never looked at/for delays in
>> these environments around pmd sharing (page faults), but that does not mean
>> they do not exist.  I will try to get the DB group to give me access to one
>> of their large environments for analysis.
>> 
>> We may want to consider making the timeout value and disable threshold user
>> configurable.
> 
> Making it configurable is certainly doable. They can be sysctl
> parameters so that the users can reenable PMD sharing by making those
> parameters larger.

It could be a Kconfig option, so people don’t need to change the setting every time
after reinstalling the system. There are times people don’t care too much
about those random multi-second delays. For example, running a debug kernel.



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 17:15         ` Waiman Long
  2019-09-11 17:22           ` Qian Cai
@ 2019-09-11 17:28           ` Waiman Long
  1 sibling, 0 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-11 17:28 UTC (permalink / raw)
  To: Mike Kravetz, Matthew Wilcox
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso

On 9/11/19 6:15 PM, Waiman Long wrote:
> On 9/11/19 6:03 PM, Mike Kravetz wrote:
>> On 9/11/19 8:44 AM, Waiman Long wrote:
>>> On 9/11/19 4:14 PM, Matthew Wilcox wrote:
>>>> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>>>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>>>> degradation (random multi-second delays) was observed when thousands
>>>>> of processes are trying to fault in the data into the huge pages. The
>>>>> likelihood of the delay increases with the number of sockets and hence
>>>>> the CPUs a system has.  This only happens in the initial setup phase
>>>>> and will be gone after all the necessary data are faulted in.
>>>> Can;t the application just specify MAP_POPULATE?
>>> Originally, I thought that this happened in the startup phase when the
>>> pages were faulted in. The problem persists after steady state had been
>>> reached though. Every time you have a new user process created, it will
>>> have its own page table.
>> This is still at fault time.  Although, for the particular application it
>> may be after the 'startup phase'.
>>
>>>                          It is the sharing of the of huge page shared
>>> memory that is causing problem. Of course, it depends on how the
>>> application is written.
>> It may be the case that some applications would find the delays acceptable
>> for the benefit of shared pmds once they reach steady state.  As you say, of
>> course this depends on how the application is written.
>>
>> I know that Oracle DB would not like it if PMD sharing is disabled for them.
>> Based on what I know of their model, all processes which share PMDs perform
>> faults (write or read) during the startup phase.  This is in environments as
>> big or bigger than you describe above.  I have never looked at/for delays in
>> these environments around pmd sharing (page faults), but that does not mean
>> they do not exist.  I will try to get the DB group to give me access to one
>> of their large environments for analysis.
>>
>> We may want to consider making the timeout value and disable threshold user
>> configurable.
> Making it configurable is certainly doable. They can be sysctl
> parameters so that the users can reenable PMD sharing by making those
> parameters larger.

I suspect that the customer's application may be generating a new
process with its own address space for each transaction. That will be
causing a lot of PMD sharing operations when hundreds of threads are
pounding it simultaneously. I had inserted some instrumentation code into
a test kernel that the customers used for testing, and the number of
timeouts after a certain amount of time went up to more than 20k.

On the other hand, if the application is structured in such a way that
there is a limited number of separate address spaces with worker threads
processing the transactions, PMD sharing will be less of a problem. It
will be hard to convince users to make such structural changes to
their application.

Cheers,
Longman




^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 16:34     ` Waiman Long
@ 2019-09-11 19:42       ` Qian Cai
  2019-09-11 20:54         ` Waiman Long
  0 siblings, 1 reply; 29+ messages in thread
From: Qian Cai @ 2019-09-11 19:42 UTC (permalink / raw)
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso



> On Sep 11, 2019, at 12:34 PM, Waiman Long <longman@redhat.com> wrote:
> 
> On 9/11/19 5:01 PM, Qian Cai wrote:
>> 
>>> On Sep 11, 2019, at 11:05 AM, Waiman Long <longman@redhat.com> wrote:
>>> 
>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>> degradation (random multi-second delays) was observed when thousands
>>> of processes are trying to fault in the data into the huge pages. The
>>> likelihood of the delay increases with the number of sockets and hence
>>> the CPUs a system has.  This only happens in the initial setup phase
>>> and will be gone after all the necessary data are faulted in.
>>> 
>>> These random delays, however, are deemed unacceptable. The cause of
>>> that delay is the long wait time in acquiring the mmap_sem when trying
>>> to share the huge PMDs.
>>> 
>>> To remove the unacceptable delays, we have to limit the amount of wait
>>> time on the mmap_sem. So the new down_write_timedlock() function is
>>> used to acquire the write lock on the mmap_sem with a timeout value of
>>> 10ms which should not cause a perceivable delay. If timeout happens,
>>> the task will abandon its effort to share the PMD and allocate its own
>>> copy instead.
>>> 
>>> When too many timeouts happens (threshold currently set at 256), the
>>> system may be too large for PMD sharing to be useful without undue delay.
>>> So the sharing will be disabled in this case.
>>> 
>>> Signed-off-by: Waiman Long <longman@redhat.com>
>>> ---
>>> include/linux/fs.h |  7 +++++++
>>> mm/hugetlb.c       | 24 +++++++++++++++++++++---
>>> 2 files changed, 28 insertions(+), 3 deletions(-)
>>> 
>>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>>> index 997a530ff4e9..e9d3ad465a6b 100644
>>> --- a/include/linux/fs.h
>>> +++ b/include/linux/fs.h
>>> @@ -40,6 +40,7 @@
>>> #include <linux/fs_types.h>
>>> #include <linux/build_bug.h>
>>> #include <linux/stddef.h>
>>> +#include <linux/ktime.h>
>>> 
>>> #include <asm/byteorder.h>
>>> #include <uapi/linux/fs.h>
>>> @@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
>>> 	down_write(&mapping->i_mmap_rwsem);
>>> }
>>> 
>>> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
>>> +					 ktime_t timeout)
>>> +{
>>> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
>>> +}
>>> +
>>> static inline void i_mmap_unlock_write(struct address_space *mapping)
>>> {
>>> 	up_write(&mapping->i_mmap_rwsem);
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index 6d7296dd11b8..445af661ae29 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>>> 	}
>>> }
>>> 
>>> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
>>> +
>>> /*
>>> * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>>> * and returns the corresponding pte. While this is not necessary for the
>>> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>>> 	pte_t *spte = NULL;
>>> 	pte_t *pte;
>>> 	spinlock_t *ptl;
>>> +	static atomic_t timeout_cnt;
>>> 
>>> -	if (!vma_shareable(vma, addr))
>>> -		return (pte_t *)pmd_alloc(mm, pud, addr);
>>> +	/*
>>> +	 * Don't share if it is not sharable or locking attempt timed out
>>> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
>>> +	 * disabled as it is just too slow.
>> It looks like this kind of policy interacts with kernel debug options like KASAN (which is going to slow the system down
>> anyway) could introduce tricky issues due to different timings on a debug kernel.
> 
> With respect to lockdep, down_write_timedlock() works like a trylock. So
> a lot of checking will be skipped. Also the lockdep code won't be run
> until the lock is acquired. So its execution time has no effect on the
> timeout.

Not only lockdep, but also things like KASAN, debug_pagealloc, page_poison, kmemleak, debug
objects, etc. are all going to slow things down in huge_pmd_share(), and make it tricky to get
the right timeout value for those debug kernels without changing the previous behavior.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 15:05 ` [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD Waiman Long
  2019-09-11 15:14   ` Matthew Wilcox
  2019-09-11 16:01   ` Qian Cai
@ 2019-09-11 19:57   ` Matthew Wilcox
  2019-09-11 20:51     ` Waiman Long
  2019-09-12  3:26   ` Mike Kravetz
  3 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox @ 2019-09-11 19:57 UTC (permalink / raw)
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso

On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
> To remove the unacceptable delays, we have to limit the amount of wait
> time on the mmap_sem. So the new down_write_timedlock() function is
> used to acquire the write lock on the mmap_sem with a timeout value of
> 10ms which should not cause a perceivable delay. If timeout happens,
> the task will abandon its effort to share the PMD and allocate its own
> copy instead.

If you do a v2, this is *NOT* the mmap_sem.  It's the i_mmap_rwsem
which protects a very different data structure from the mmap_sem.

> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
> +					 ktime_t timeout)
> +{
> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
> +}


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 19:57   ` Matthew Wilcox
@ 2019-09-11 20:51     ` Waiman Long
  0 siblings, 0 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-11 20:51 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso

On 9/11/19 8:57 PM, Matthew Wilcox wrote:
> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>> To remove the unacceptable delays, we have to limit the amount of wait
>> time on the mmap_sem. So the new down_write_timedlock() function is
>> used to acquire the write lock on the mmap_sem with a timeout value of
>> 10ms which should not cause a perceivable delay. If timeout happens,
>> the task will abandon its effort to share the PMD and allocate its own
>> copy instead.
> If you do a v2, this is *NOT* the mmap_sem.  It's the i_mmap_rwsem
> which protects a very different data structure from the mmap_sem.
>
Thanks for the reminder. I should have read the code more carefully.

Cheers,
Longman



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 19:42       ` Qian Cai
@ 2019-09-11 20:54         ` Waiman Long
  2019-09-11 21:57           ` Qian Cai
  0 siblings, 1 reply; 29+ messages in thread
From: Waiman Long @ 2019-09-11 20:54 UTC (permalink / raw)
  To: Qian Cai
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso

On 9/11/19 8:42 PM, Qian Cai wrote:
>
>> On Sep 11, 2019, at 12:34 PM, Waiman Long <longman@redhat.com> wrote:
>>
>> On 9/11/19 5:01 PM, Qian Cai wrote:
>>>> On Sep 11, 2019, at 11:05 AM, Waiman Long <longman@redhat.com> wrote:
>>>>
>>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>>> degradation (random multi-second delays) was observed when thousands
>>>> of processes are trying to fault in the data into the huge pages. The
>>>> likelihood of the delay increases with the number of sockets and hence
>>>> the CPUs a system has.  This only happens in the initial setup phase
>>>> and will be gone after all the necessary data are faulted in.
>>>>
>>>> These random delays, however, are deemed unacceptable. The cause of
>>>> that delay is the long wait time in acquiring the mmap_sem when trying
>>>> to share the huge PMDs.
>>>>
>>>> To remove the unacceptable delays, we have to limit the amount of wait
>>>> time on the mmap_sem. So the new down_write_timedlock() function is
>>>> used to acquire the write lock on the mmap_sem with a timeout value of
>>>> 10ms which should not cause a perceivable delay. If timeout happens,
>>>> the task will abandon its effort to share the PMD and allocate its own
>>>> copy instead.
>>>>
>>>> When too many timeouts happens (threshold currently set at 256), the
>>>> system may be too large for PMD sharing to be useful without undue delay.
>>>> So the sharing will be disabled in this case.
>>>>
>>>> Signed-off-by: Waiman Long <longman@redhat.com>
>>>> ---
>>>> include/linux/fs.h |  7 +++++++
>>>> mm/hugetlb.c       | 24 +++++++++++++++++++++---
>>>> 2 files changed, 28 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>>>> index 997a530ff4e9..e9d3ad465a6b 100644
>>>> --- a/include/linux/fs.h
>>>> +++ b/include/linux/fs.h
>>>> @@ -40,6 +40,7 @@
>>>> #include <linux/fs_types.h>
>>>> #include <linux/build_bug.h>
>>>> #include <linux/stddef.h>
>>>> +#include <linux/ktime.h>
>>>>
>>>> #include <asm/byteorder.h>
>>>> #include <uapi/linux/fs.h>
>>>> @@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
>>>> 	down_write(&mapping->i_mmap_rwsem);
>>>> }
>>>>
>>>> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
>>>> +					 ktime_t timeout)
>>>> +{
>>>> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
>>>> +}
>>>> +
>>>> static inline void i_mmap_unlock_write(struct address_space *mapping)
>>>> {
>>>> 	up_write(&mapping->i_mmap_rwsem);
>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>> index 6d7296dd11b8..445af661ae29 100644
>>>> --- a/mm/hugetlb.c
>>>> +++ b/mm/hugetlb.c
>>>> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>>>> 	}
>>>> }
>>>>
>>>> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
>>>> +
>>>> /*
>>>> * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>>>> * and returns the corresponding pte. While this is not necessary for the
>>>> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>>>> 	pte_t *spte = NULL;
>>>> 	pte_t *pte;
>>>> 	spinlock_t *ptl;
>>>> +	static atomic_t timeout_cnt;
>>>>
>>>> -	if (!vma_shareable(vma, addr))
>>>> -		return (pte_t *)pmd_alloc(mm, pud, addr);
>>>> +	/*
>>>> +	 * Don't share if it is not sharable or locking attempt timed out
>>>> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
>>>> +	 * disabled as it is just too slow.
>>> It looks like this kind of policy interacts with kernel debug options like KASAN (which is going to slow the system down
>>> anyway) could introduce tricky issues due to different timings on a debug kernel.
>> With respect to lockdep, down_write_timedlock() works like a trylock. So
>> a lot of checking will be skipped. Also the lockdep code won't be run
>> until the lock is acquired. So its execution time has no effect on the
>> timeout.
> No only lockdep, but also things like KASAN, debug_pagealloc, page_poison, kmemleak, debug
> objects etc that  all going to slow down things in huge_pmd_share(), and make it tricky to get a
> right timeout value for those debug kernels without changing the previous behavior.

Right, I understand that. I will move to using a sysctl parameter for
the timeout and then set its default value to either 10ms or 20ms if
some debug options are detected. Usually the slowdown should not be
more than 2X.
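
A rough sketch of how such a knob could be wired up (the name, placement
and default here are illustrative only, not from any posted patch):

#include <linux/init.h>
#include <linux/sysctl.h>

static unsigned int hugetlb_pmd_share_timeout_ms = 10;	/* tentative default */

static struct ctl_table hugetlb_pmd_share_table[] = {
	{
		.procname	= "hugetlb_pmd_share_timeout_ms",
		.data		= &hugetlb_pmd_share_timeout_ms,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_douintvec,
	},
	{ }
};

static int __init hugetlb_pmd_share_sysctl_init(void)
{
	register_sysctl("vm", hugetlb_pmd_share_table);
	return 0;
}
late_initcall(hugetlb_pmd_share_sysctl_init);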

Cheers,
Longman



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 20:54         ` Waiman Long
@ 2019-09-11 21:57           ` Qian Cai
  0 siblings, 0 replies; 29+ messages in thread
From: Qian Cai @ 2019-09-11 21:57 UTC (permalink / raw)
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso



> On Sep 11, 2019, at 4:54 PM, Waiman Long <longman@redhat.com> wrote:
> 
> On 9/11/19 8:42 PM, Qian Cai wrote:
>> 
>>> On Sep 11, 2019, at 12:34 PM, Waiman Long <longman@redhat.com> wrote:
>>> 
>>> On 9/11/19 5:01 PM, Qian Cai wrote:
>>>>> On Sep 11, 2019, at 11:05 AM, Waiman Long <longman@redhat.com> wrote:
>>>>> 
>>>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>>>> degradation (random multi-second delays) was observed when thousands
>>>>> of processes are trying to fault in the data into the huge pages. The
>>>>> likelihood of the delay increases with the number of sockets and hence
>>>>> the CPUs a system has.  This only happens in the initial setup phase
>>>>> and will be gone after all the necessary data are faulted in.
>>>>> 
>>>>> These random delays, however, are deemed unacceptable. The cause of
>>>>> that delay is the long wait time in acquiring the mmap_sem when trying
>>>>> to share the huge PMDs.
>>>>> 
>>>>> To remove the unacceptable delays, we have to limit the amount of wait
>>>>> time on the mmap_sem. So the new down_write_timedlock() function is
>>>>> used to acquire the write lock on the mmap_sem with a timeout value of
>>>>> 10ms which should not cause a perceivable delay. If timeout happens,
>>>>> the task will abandon its effort to share the PMD and allocate its own
>>>>> copy instead.
>>>>> 
>>>>> When too many timeouts happens (threshold currently set at 256), the
>>>>> system may be too large for PMD sharing to be useful without undue delay.
>>>>> So the sharing will be disabled in this case.
>>>>> 
>>>>> Signed-off-by: Waiman Long <longman@redhat.com>
>>>>> ---
>>>>> include/linux/fs.h |  7 +++++++
>>>>> mm/hugetlb.c       | 24 +++++++++++++++++++++---
>>>>> 2 files changed, 28 insertions(+), 3 deletions(-)
>>>>> 
>>>>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>>>>> index 997a530ff4e9..e9d3ad465a6b 100644
>>>>> --- a/include/linux/fs.h
>>>>> +++ b/include/linux/fs.h
>>>>> @@ -40,6 +40,7 @@
>>>>> #include <linux/fs_types.h>
>>>>> #include <linux/build_bug.h>
>>>>> #include <linux/stddef.h>
>>>>> +#include <linux/ktime.h>
>>>>> 
>>>>> #include <asm/byteorder.h>
>>>>> #include <uapi/linux/fs.h>
>>>>> @@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
>>>>> 	down_write(&mapping->i_mmap_rwsem);
>>>>> }
>>>>> 
>>>>> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
>>>>> +					 ktime_t timeout)
>>>>> +{
>>>>> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
>>>>> +}
>>>>> +
>>>>> static inline void i_mmap_unlock_write(struct address_space *mapping)
>>>>> {
>>>>> 	up_write(&mapping->i_mmap_rwsem);
>>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>>> index 6d7296dd11b8..445af661ae29 100644
>>>>> --- a/mm/hugetlb.c
>>>>> +++ b/mm/hugetlb.c
>>>>> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>>>>> 	}
>>>>> }
>>>>> 
>>>>> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
>>>>> +
>>>>> /*
>>>>> * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>>>>> * and returns the corresponding pte. While this is not necessary for the
>>>>> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>>>>> 	pte_t *spte = NULL;
>>>>> 	pte_t *pte;
>>>>> 	spinlock_t *ptl;
>>>>> +	static atomic_t timeout_cnt;
>>>>> 
>>>>> -	if (!vma_shareable(vma, addr))
>>>>> -		return (pte_t *)pmd_alloc(mm, pud, addr);
>>>>> +	/*
>>>>> +	 * Don't share if it is not sharable or locking attempt timed out
>>>>> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
>>>>> +	 * disabled as it is just too slow.
>>>> It looks like this kind of policy interacts with kernel debug options like KASAN (which is going to slow the system down
>>>> anyway) could introduce tricky issues due to different timings on a debug kernel.
>>> With respect to lockdep, down_write_timedlock() works like a trylock. So
>>> a lot of checking will be skipped. Also the lockdep code won't be run
>>> until the lock is acquired. So its execution time has no effect on the
>>> timeout.
>> No only lockdep, but also things like KASAN, debug_pagealloc, page_poison, kmemleak, debug
>> objects etc that  all going to slow down things in huge_pmd_share(), and make it tricky to get a
>> right timeout value for those debug kernels without changing the previous behavior.
> 
> Right, I understand that. I will move to use a sysctl parameters for the
> timeout and then set its default value to either 10ms or 20ms if some
> debug options are detected. Usually the slower than should not be more
> than 2X.

That 2X is another magic number with no testing data to back it up. We need a way to disable the
timeout completely in Kconfig, so it can ship as part of a debug kernel package.
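
One possible shape for such an opt-out (the CONFIG symbol and helper
below are invented purely for illustration; they are not in the posted
series):

/*
 * Hypothetical: with a Kconfig option such as HUGETLB_PMD_SHARE_TIMEOUT,
 * debug kernels could leave it unset and keep today's unconditional
 * i_mmap_lock_write() behavior, with no timeout involved at all.
 */
#ifdef CONFIG_HUGETLB_PMD_SHARE_TIMEOUT
static inline bool pmd_share_lock(struct address_space *mapping)
{
	return i_mmap_timedlock_write(mapping, ms_to_ktime(10));
}
#else
static inline bool pmd_share_lock(struct address_space *mapping)
{
	i_mmap_lock_write(mapping);	/* wait as long as it takes */
	return true;
}
#endif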




^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 15:05 ` [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD Waiman Long
                     ` (2 preceding siblings ...)
  2019-09-11 19:57   ` Matthew Wilcox
@ 2019-09-12  3:26   ` Mike Kravetz
  2019-09-12  3:41     ` Matthew Wilcox
  2019-09-12  9:06     ` Waiman Long
  3 siblings, 2 replies; 29+ messages in thread
From: Mike Kravetz @ 2019-09-12  3:26 UTC (permalink / raw)
  To: Waiman Long, Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro
  Cc: linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso

On 9/11/19 8:05 AM, Waiman Long wrote:
> When allocating a large amount of static hugepages (~500-1500GB) on a
> system with large number of CPUs (4, 8 or even 16 sockets), performance
> degradation (random multi-second delays) was observed when thousands
> of processes are trying to fault in the data into the huge pages. The
> likelihood of the delay increases with the number of sockets and hence
> the CPUs a system has.  This only happens in the initial setup phase
> and will be gone after all the necessary data are faulted in.
> 
> These random delays, however, are deemed unacceptable. The cause of
> that delay is the long wait time in acquiring the mmap_sem when trying
> to share the huge PMDs.
> 
> To remove the unacceptable delays, we have to limit the amount of wait
> time on the mmap_sem. So the new down_write_timedlock() function is
> used to acquire the write lock on the mmap_sem with a timeout value of
> 10ms which should not cause a perceivable delay. If timeout happens,
> the task will abandon its effort to share the PMD and allocate its own
> copy instead.
> 
<snip>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6d7296dd11b8..445af661ae29 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>  	}
>  }
>  
> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
> +
>  /*
>   * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>   * and returns the corresponding pte. While this is not necessary for the
> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>  	pte_t *spte = NULL;
>  	pte_t *pte;
>  	spinlock_t *ptl;
> +	static atomic_t timeout_cnt;
>  
> -	if (!vma_shareable(vma, addr))
> -		return (pte_t *)pmd_alloc(mm, pud, addr);
> +	/*
> +	 * Don't share if it is not sharable or locking attempt timed out
> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
> +	 * disabled as it is just too slow.
> +	 */
> +	if (!vma_shareable(vma, addr) ||
> +	   (atomic_read(&timeout_cnt) >= PMD_SHARE_DISABLE_THRESHOLD))
> +		goto out_no_share;
> +
> +	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
> +		if (atomic_inc_return(&timeout_cnt) ==
> +		    PMD_SHARE_DISABLE_THRESHOLD)
> +			pr_info("Hugetlbfs PMD sharing disabled because of timeouts!\n");
> +		goto out_no_share;
> +	}
>  
> -	i_mmap_lock_write(mapping);

All this got me wondering if we really need to take i_mmap_rwsem in write
mode here.  We are not changing the tree, only traversing it looking for
a suitable vma.

Unless I am missing something, the hugetlb code only ever takes the semaphore
in write mode; never read.  Could this have been the result of changing the
tree semaphore to read/write?  Instead of analyzing all the code, the easiest
and safest thing would have been to take all accesses in write mode.

I can investigate more, but wanted to ask the question in case someone already
knows.

At one time, I thought it was safe to acquire the semaphore in read mode for
huge_pmd_share, but write mode for huge_pmd_unshare.  See commit b43a99900559.
This was reverted along with another patch for other reasons.

If we change from write to read mode, this may have a significant impact
on the stalls.
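
If read mode is indeed sufficient, the experiment is essentially the
following change in huge_pmd_share() (sketch only, untested; the
i_mmap_lock_read()/i_mmap_unlock_read() helpers already exist in
include/linux/fs.h):

-	i_mmap_lock_write(mapping);
+	i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
 		...
 	}
 	...
 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
-	i_mmap_unlock_write(mapping);
+	i_mmap_unlock_read(mapping);
 	return pte;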
-- 
Mike Kravetz


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-12  3:26   ` Mike Kravetz
@ 2019-09-12  3:41     ` Matthew Wilcox
  2019-09-12  4:40       ` Davidlohr Bueso
  2019-09-12  9:06     ` Waiman Long
  1 sibling, 1 reply; 29+ messages in thread
From: Matthew Wilcox @ 2019-09-12  3:41 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Waiman Long, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Alexander Viro, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso

On Wed, Sep 11, 2019 at 08:26:52PM -0700, Mike Kravetz wrote:
> All this got me wondering if we really need to take i_mmap_rwsem in write
> mode here.  We are not changing the tree, only traversing it looking for
> a suitable vma.
> 
> Unless I am missing something, the hugetlb code only ever takes the semaphore
> in write mode; never read.  Could this have been the result of changing the
> tree semaphore to read/write?  Instead of analyzing all the code, the easiest
> and safest thing would have been to take all accesses in write mode.

I was wondering the same thing.  It was changed here:

commit 83cde9e8ba95d180eaefefe834958fbf7008cf39
Author: Davidlohr Bueso <dave@stgolabs.net>
Date:   Fri Dec 12 16:54:21 2014 -0800

    mm: use new helper functions around the i_mmap_mutex
    
    Convert all open coded mutex_lock/unlock calls to the
    i_mmap_[lock/unlock]_write() helpers.

and a subsequent patch said:

    This conversion is straightforward.  For now, all users take the write
    lock.

There were subsequent patches which changed a few places
c8475d144abb1e62958cc5ec281d2a9e161c1946
1acf2e040721564d579297646862b8ea3dd4511b
d28eb9c861f41aa2af4cfcc5eeeddff42b13d31e
874bfcaf79e39135cd31e1cfc9265cf5222d1ec3
3dec0ba0be6a532cac949e02b853021bf6d57dad

but I don't know why this one wasn't changed.

(I was also wondering about caching a potentially sharable page table
in the address_space to avoid having to walk the VMA tree at all if that
one happened to be sharable).
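
Purely as an illustration of that caching idea (the field and helper
names are invented, and the hard part, invalidating the hint when the
PMD page is unshared or the VMA goes away, is not shown):

/* Hypothetical per-mapping hint: the last PMD page found to be sharable */
struct shared_pmd_hint {
	pgoff_t		idx;		/* PUD-aligned file index it covers */
	struct page	*pmd_page;	/* PMD page, or NULL if nothing cached */
};

/* Try the hint before walking mapping->i_mmap in huge_pmd_share(). */
static struct page *shared_pmd_hint_lookup(struct shared_pmd_hint *hint,
					    pgoff_t idx)
{
	if (hint->pmd_page && hint->idx == idx)
		return hint->pmd_page;
	return NULL;
}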


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-12  3:41     ` Matthew Wilcox
@ 2019-09-12  4:40       ` Davidlohr Bueso
  2019-09-16 13:53         ` Waiman Long
  0 siblings, 1 reply; 29+ messages in thread
From: Davidlohr Bueso @ 2019-09-12  4:40 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Mike Kravetz, Waiman Long, Peter Zijlstra, Ingo Molnar,
	Will Deacon, Alexander Viro, linux-kernel, linux-fsdevel,
	linux-mm

On Wed, 11 Sep 2019, Matthew Wilcox wrote:

>On Wed, Sep 11, 2019 at 08:26:52PM -0700, Mike Kravetz wrote:
>> All this got me wondering if we really need to take i_mmap_rwsem in write
>> mode here.  We are not changing the tree, only traversing it looking for
>> a suitable vma.
>>
>> Unless I am missing something, the hugetlb code only ever takes the semaphore
>> in write mode; never read.  Could this have been the result of changing the
>> tree semaphore to read/write?  Instead of analyzing all the code, the easiest
>> and safest thing would have been to take all accesses in write mode.
>
>I was wondering the same thing.  It was changed here:
>
>commit 83cde9e8ba95d180eaefefe834958fbf7008cf39
>Author: Davidlohr Bueso <dave@stgolabs.net>
>Date:   Fri Dec 12 16:54:21 2014 -0800
>
>    mm: use new helper functions around the i_mmap_mutex
>
>    Convert all open coded mutex_lock/unlock calls to the
>    i_mmap_[lock/unlock]_write() helpers.
>
>and a subsequent patch said:
>
>    This conversion is straightforward.  For now, all users take the write
>    lock.
>
>There were subsequent patches which changed a few places
>c8475d144abb1e62958cc5ec281d2a9e161c1946
>1acf2e040721564d579297646862b8ea3dd4511b
>d28eb9c861f41aa2af4cfcc5eeeddff42b13d31e
>874bfcaf79e39135cd31e1cfc9265cf5222d1ec3
>3dec0ba0be6a532cac949e02b853021bf6d57dad
>
>but I don't know why this one wasn't changed.

I cannot recall why huge_pmd_share() was not changed along with the other
callers that don't modify the interval tree. By looking at the function,
I agree that this lock could be taken in shared mode; in fact this lock
is much less involved than its anon_vma counterpart, last I checked
(perhaps with the exception of take_rmap_locks()).

>
>(I was also wondering about caching a potentially sharable page table
>in the address_space to avoid having to walk the VMA tree at all if that
>one happened to be sharable).

I also think that the right solution is within the mm instead of adding
a new api to rwsem and the extra complexity/overhead to osq _just_ for this
case. We've managed to not need timeout extensions in our locking primitives
thus far, which is a good thing imo.

Thanks,
Davidlohr


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-11 15:05 [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Waiman Long
                   ` (4 preceding siblings ...)
  2019-09-11 15:05 ` [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD Waiman Long
@ 2019-09-12  5:36 ` Hillf Danton
  2019-09-13  1:50 ` [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Dave Chinner
  6 siblings, 0 replies; 29+ messages in thread
From: Hillf Danton @ 2019-09-12  5:36 UTC (permalink / raw)
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso


On Wed, 11 Sep 2019 16:05:37 +0100
> 
> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
> +
>  /*
>   * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>   * and returns the corresponding pte. While this is not necessary for the
> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>  	pte_t *spte = NULL;
>  	pte_t *pte;
>  	spinlock_t *ptl;
> +	static atomic_t timeout_cnt;
>  
> -	if (!vma_shareable(vma, addr))
> -		return (pte_t *)pmd_alloc(mm, pud, addr);
> +	/*
> +	 * Don't share if it is not sharable or locking attempt timed out
> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
> +	 * disabled as it is just too slow.
> +	 */
> +	if (!vma_shareable(vma, addr) ||
> +	   (atomic_read(&timeout_cnt) >= PMD_SHARE_DISABLE_THRESHOLD))
> +		goto out_no_share;
> +
> +	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
> +		if (atomic_inc_return(&timeout_cnt) ==
> +		    PMD_SHARE_DISABLE_THRESHOLD)
> +			pr_info("Hugetlbfs PMD sharing disabled because of timeouts!\n");
> +		goto out_no_share;
> +	}
	atomic_dec_if_positive(&timeout_cnt);

The logic to permanently disable PMD sharing after 256 timeouts does not
make much sense without something like the atomic_dec_if_positive() above
on the success path to balance out the increments.

>  
> -	i_mmap_lock_write(mapping);
>  	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
>  		if (svma == vma)
>  			continue;
> @@ -4806,6 +4821,9 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>  	pte = (pte_t *)pmd_alloc(mm, pud, addr);
>  	i_mmap_unlock_write(mapping);
>  	return pte;
> +
> +out_no_share:
> +	return (pte_t *)pmd_alloc(mm, pud, addr);
>  }
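
Reading the suggestion above as code (a sketch of the idea, not a posted
patch): in-time lock acquisitions would decrement the counter, so only a
sustained pattern of timeouts can ever reach the disable threshold:

	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
		if (atomic_inc_return(&timeout_cnt) ==
		    PMD_SHARE_DISABLE_THRESHOLD)
			pr_info("Hugetlbfs PMD sharing disabled because of timeouts!\n");
		goto out_no_share;
	}
	/* Lock acquired within the timeout: cancel out an earlier timeout */
	atomic_dec_if_positive(&timeout_cnt);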



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-12  3:26   ` Mike Kravetz
  2019-09-12  3:41     ` Matthew Wilcox
@ 2019-09-12  9:06     ` Waiman Long
  2019-09-12 16:43       ` Mike Kravetz
  1 sibling, 1 reply; 29+ messages in thread
From: Waiman Long @ 2019-09-12  9:06 UTC (permalink / raw)
  To: Mike Kravetz, Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro
  Cc: linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso

On 9/12/19 4:26 AM, Mike Kravetz wrote:
> On 9/11/19 8:05 AM, Waiman Long wrote:
>> When allocating a large amount of static hugepages (~500-1500GB) on a
>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>> degradation (random multi-second delays) was observed when thousands
>> of processes are trying to fault in the data into the huge pages. The
>> likelihood of the delay increases with the number of sockets and hence
>> the CPUs a system has.  This only happens in the initial setup phase
>> and will be gone after all the necessary data are faulted in.
>>
>> These random delays, however, are deemed unacceptable. The cause of
>> that delay is the long wait time in acquiring the mmap_sem when trying
>> to share the huge PMDs.
>>
>> To remove the unacceptable delays, we have to limit the amount of wait
>> time on the mmap_sem. So the new down_write_timedlock() function is
>> used to acquire the write lock on the mmap_sem with a timeout value of
>> 10ms which should not cause a perceivable delay. If timeout happens,
>> the task will abandon its effort to share the PMD and allocate its own
>> copy instead.
>>
> <snip>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 6d7296dd11b8..445af661ae29 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>>  	}
>>  }
>>  
>> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
>> +
>>  /*
>>   * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>>   * and returns the corresponding pte. While this is not necessary for the
>> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>>  	pte_t *spte = NULL;
>>  	pte_t *pte;
>>  	spinlock_t *ptl;
>> +	static atomic_t timeout_cnt;
>>  
>> -	if (!vma_shareable(vma, addr))
>> -		return (pte_t *)pmd_alloc(mm, pud, addr);
>> +	/*
>> +	 * Don't share if it is not sharable or locking attempt timed out
>> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
>> +	 * disabled as it is just too slow.
>> +	 */
>> +	if (!vma_shareable(vma, addr) ||
>> +	   (atomic_read(&timeout_cnt) >= PMD_SHARE_DISABLE_THRESHOLD))
>> +		goto out_no_share;
>> +
>> +	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
>> +		if (atomic_inc_return(&timeout_cnt) ==
>> +		    PMD_SHARE_DISABLE_THRESHOLD)
>> +			pr_info("Hugetlbfs PMD sharing disabled because of timeouts!\n");
>> +		goto out_no_share;
>> +	}
>>  
>> -	i_mmap_lock_write(mapping);
> All this got me wondering if we really need to take i_mmap_rwsem in write
> mode here.  We are not changing the tree, only traversing it looking for
> a suitable vma.
>
> Unless I am missing something, the hugetlb code only ever takes the semaphore
> in write mode; never read.  Could this have been the result of changing the
> tree semaphore to read/write?  Instead of analyzing all the code, the easiest
> and safest thing would have been to take all accesses in write mode.
>
> I can investigate more, but wanted to ask the question in case someone already
> knows.
>
> At one time, I thought it was safe to acquire the semaphore in read mode for
> huge_pmd_share, but write mode for huge_pmd_unshare.  See commit b43a99900559.
> This was reverted along with another patch for other reasons.
>
> If we change change from write to read mode, this may have significant impact
> on the stalls.

If we can take the rwsem in read mode, that should solve the problem
AFAICS. As I don't have a full understanding of the history of that
code, I didn't try to do that in my patch.

Cheers,
Longman



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-12  9:06     ` Waiman Long
@ 2019-09-12 16:43       ` Mike Kravetz
  2019-09-13 18:23         ` Waiman Long
  0 siblings, 1 reply; 29+ messages in thread
From: Mike Kravetz @ 2019-09-12 16:43 UTC (permalink / raw)
  To: Waiman Long, Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro
  Cc: linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso

On 9/12/19 2:06 AM, Waiman Long wrote:
> If we can take the rwsem in read mode, that should solve the problem
> AFAICS. As I don't have a full understanding of the history of that
> code, I didn't try to do that in my patch.

Do you still have access to an environment that creates the long stalls?
If so, can you try the simple change of taking the semaphore in read mode
in huge_pmd_share?

-- 
Mike Kravetz


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems
  2019-09-11 15:05 [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Waiman Long
                   ` (5 preceding siblings ...)
  2019-09-12  5:36 ` Hillf Danton
@ 2019-09-13  1:50 ` Dave Chinner
  2019-09-25  8:35   ` Peter Zijlstra
  6 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2019-09-13  1:50 UTC (permalink / raw)
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso

On Wed, Sep 11, 2019 at 04:05:32PM +0100, Waiman Long wrote:
> A customer with large SMP systems (up to 16 sockets) with application
> that uses large amount of static hugepages (~500-1500GB) are experiencing
> random multisecond delays. These delays was caused by the long time it
> took to scan the VMA interval tree with mmap_sem held.
> 
> To fix this problem while perserving existing behavior as much as
> possible, we need to allow timeout in down_write() and disabling PMD
> sharing when it is taking too long to do so. Since a transaction can
> involving touching multiple huge pages, timing out for each of the huge
> page interactions does not completely solve the problem. So a threshold
> is set to completely disable PMD sharing if too many timeouts happen.
> 
> The first 4 patches of this 5-patch series adds a new
> down_write_timedlock() API which accepts a timeout argument and return
> true is locking is successful or false otherwise. It works more or less
> than a down_write_trylock() but the calling thread may sleep.

Just on general principle, this is a non-starter. If a lock is being
held too long, then whatever the lock is protecting needs fixing.
Adding timeouts to locks and sysctls to tune them is not a viable
solution to address latencies caused by algorithm scalability
issues.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-12 16:43       ` Mike Kravetz
@ 2019-09-13 18:23         ` Waiman Long
  0 siblings, 0 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-13 18:23 UTC (permalink / raw)
  To: Mike Kravetz, Peter Zijlstra, Ingo Molnar, Will Deacon, Alexander Viro
  Cc: linux-kernel, linux-fsdevel, linux-mm, Davidlohr Bueso

On 9/12/19 5:43 PM, Mike Kravetz wrote:
> On 9/12/19 2:06 AM, Waiman Long wrote:
>> If we can take the rwsem in read mode, that should solve the problem
>> AFAICS. As I don't have a full understanding of the history of that
>> code, I didn't try to do that in my patch.
> Do you still have access to an environment that creates the long stalls?
> If so, can you try the simple change of taking the semaphore in read mode
> in huge_pmd_share.
>
That is what I am planning to do. I don't have an environment to
reproduce the problem myself. I have to create a test kernel and ask the
customer to try it out.

Cheers,
Longman



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD
  2019-09-12  4:40       ` Davidlohr Bueso
@ 2019-09-16 13:53         ` Waiman Long
  0 siblings, 0 replies; 29+ messages in thread
From: Waiman Long @ 2019-09-16 13:53 UTC (permalink / raw)
  To: Matthew Wilcox, Mike Kravetz, Peter Zijlstra, Ingo Molnar,
	Will Deacon, Alexander Viro, linux-kernel, linux-fsdevel,
	linux-mm

On 9/12/19 12:40 AM, Davidlohr Bueso wrote:
>
> I also think that the right solution is within the mm instead of adding
> a new api to rwsem and the extra complexity/overhead to osq _just_ for
> this
> case. We've managed to not need timeout extensions in our locking
> primitives
> thus far, which is a good thing imo. 

Adding a variant with a timeout can be useful in resolving some potential
deadlock issues found by lockdep. Anyway, there was talk about merging
rt-mutex and the regular mutex at LPC last week, so we will need a
mutex_lock() variant with a timeout for that to happen.
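
For comparison, userspace locking has had timed variants for a long
time. A minimal, self-contained POSIX sketch of the same
acquire-within-10ms-or-fall-back pattern (an analogy only, not the
proposed kernel API):

#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

static int timed_write_lock_ms(pthread_rwlock_t *l, long ms)
{
	struct timespec ts;

	clock_gettime(CLOCK_REALTIME, &ts);	/* timedwrlock takes absolute CLOCK_REALTIME */
	ts.tv_nsec += ms * 1000000L;
	ts.tv_sec += ts.tv_nsec / 1000000000L;
	ts.tv_nsec %= 1000000000L;
	return pthread_rwlock_timedwrlock(l, &ts);	/* 0 on success, ETIMEDOUT on timeout */
}

int main(void)
{
	if (timed_write_lock_ms(&lock, 10) == 0) {
		/* got the lock in time: do the work that needs exclusion */
		pthread_rwlock_unlock(&lock);
	} else {
		/* timed out: fall back, e.g. allocate a private copy */
		puts("timed out, falling back");
	}
	return 0;
}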

Cheers,
Longman



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems
  2019-09-13  1:50 ` [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Dave Chinner
@ 2019-09-25  8:35   ` Peter Zijlstra
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Zijlstra @ 2019-09-25  8:35 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Waiman Long, Ingo Molnar, Will Deacon, Alexander Viro,
	Mike Kravetz, linux-kernel, linux-fsdevel, linux-mm,
	Davidlohr Bueso

On Fri, Sep 13, 2019 at 11:50:43AM +1000, Dave Chinner wrote:
> On Wed, Sep 11, 2019 at 04:05:32PM +0100, Waiman Long wrote:
> > A customer with large SMP systems (up to 16 sockets) with application
> > that uses large amount of static hugepages (~500-1500GB) are experiencing
> > random multisecond delays. These delays was caused by the long time it
> > took to scan the VMA interval tree with mmap_sem held.
> > 
> > To fix this problem while perserving existing behavior as much as
> > possible, we need to allow timeout in down_write() and disabling PMD
> > sharing when it is taking too long to do so. Since a transaction can
> > involving touching multiple huge pages, timing out for each of the huge
> > page interactions does not completely solve the problem. So a threshold
> > is set to completely disable PMD sharing if too many timeouts happen.
> > 
> > The first 4 patches of this 5-patch series adds a new
> > down_write_timedlock() API which accepts a timeout argument and return
> > true is locking is successful or false otherwise. It works more or less
> > than a down_write_trylock() but the calling thread may sleep.
> 
> Just on general principle, this is a non-starter. If a lock is being
> held too long, then whatever the lock is protecting needs fixing.
> Adding timeouts to locks and sysctls to tune them is not a viable
> solution to address latencies caused by algorithm scalability
> issues.

I very much agree here. Lock functions with timeouts are a sign of
horrific design.


^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2019-09-25  8:36 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-11 15:05 [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Waiman Long
2019-09-11 15:05 ` [PATCH 1/5] locking/rwsem: Add down_write_timedlock() Waiman Long
2019-09-11 15:05 ` [PATCH 2/5] locking/rwsem: Enable timeout check when spinning on owner Waiman Long
2019-09-11 15:05 ` [PATCH 3/5] locking/osq: Allow early break from OSQ Waiman Long
2019-09-11 15:05 ` [PATCH 4/5] locking/rwsem: Enable timeout check when staying in the OSQ Waiman Long
2019-09-11 15:05 ` [PATCH 5/5] hugetlbfs: Limit wait time when trying to share huge PMD Waiman Long
2019-09-11 15:14   ` Matthew Wilcox
2019-09-11 15:44     ` Waiman Long
2019-09-11 17:03       ` Mike Kravetz
2019-09-11 17:15         ` Waiman Long
2019-09-11 17:22           ` Qian Cai
2019-09-11 17:28           ` Waiman Long
2019-09-11 16:01   ` Qian Cai
2019-09-11 16:34     ` Waiman Long
2019-09-11 19:42       ` Qian Cai
2019-09-11 20:54         ` Waiman Long
2019-09-11 21:57           ` Qian Cai
2019-09-11 19:57   ` Matthew Wilcox
2019-09-11 20:51     ` Waiman Long
2019-09-12  3:26   ` Mike Kravetz
2019-09-12  3:41     ` Matthew Wilcox
2019-09-12  4:40       ` Davidlohr Bueso
2019-09-16 13:53         ` Waiman Long
2019-09-12  9:06     ` Waiman Long
2019-09-12 16:43       ` Mike Kravetz
2019-09-13 18:23         ` Waiman Long
2019-09-12  5:36 ` Hillf Danton
2019-09-13  1:50 ` [PATCH 0/5] hugetlbfs: Disable PMD sharing for large systems Dave Chinner
2019-09-25  8:35   ` Peter Zijlstra
