linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/7] rwsem: Implement down_read_killable()
@ 2017-06-19 18:02 Kirill Tkhai
  2017-06-19 18:02 ` [PATCH 1/7] rwsem-spinlock: Add killable versions of __down_read() Kirill Tkhai
                   ` (8 more replies)
  0 siblings, 9 replies; 15+ messages in thread
From: Kirill Tkhai @ 2017-06-19 18:02 UTC (permalink / raw)
  To: linux-ia64, avagin, peterz, heiko.carstens, hpa, gorcunov,
	linux-arch, linux-s390, x86, mingo, mattst88, fenghua.yu, arnd,
	ktkhai, ink, tglx, rth, tony.luck, linux-kernel, linux-alpha,
	schwidefsky, davem

This series implements a killable version of down_read(),
similar to the already existing down_write_killable().
Patches [1-2/7] add arch-independent low-level primitives
for both rwsem types.

Patches [3-6/7] add arch-dependent primitives for
the architectures that use the rwsem-xadd implementation.
Only x86 required assembly changes; the rest of the
architectures do not need them.

I tested the series on x86 (which uses the RWSEM_XCHGADD_ALGORITHM
config option), and also the RWSEM_GENERIC_SPINLOCK case,
which I selected manually in Kconfig. alpha, ia64 and s390
are compile-tested only, but I believe their changes are
straightforward. People who work with these architectures,
please take a look at the corresponding patches.

***

Where this came from: the create/destroy cycle of a net
namespace is currently slow because it happens under net_mutex.
During cleanup_net(), the mutex is held the whole time
RCU is synchronizing, which takes a lot of time, especially
when RCU is preemptible, and while it happens the creation
of new net namespaces is blocked. This can be optimized by
using fine-grained locks in the pernet callbacks where they
are needed, and by converting net_mutex into an rw semaphore.
In cleanup_net() we only need to prevent registration and
unregistration of pernet_subsys and pernet_devices, whose
actions can't be missed by an unhashed dead net namespace.
down_read() guarantees that, and it's lightweight.

Using the rwsem improves the create/destroy cycle performance
on my development kernel significantly:

$ time for i in {1..10000}; do unshare -n -- bash -c exit; done

MUTEX:
real 1m13.372s
user 0m9.278s
sys 0m17.181s

RWSEM:
real 0m17.482s
user 0m3.791s
sys 0m13.723s

Of course, this is just one example; down_read_killable() is
a generic primitive and may be used in other places.
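As a rough illustration of the caller-side contract (a minimal userspace sketch with made-up `*_model` names, not kernel code): down_read_killable() returns 0 with the lock held, or -EINTR with the lock never taken, so the caller must skip up_read() on failure:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Userspace model of the contract the new down_read_killable()
 * gives its callers: 0 on success, -EINTR when a fatal signal
 * aborted the wait.  On -EINTR the lock was never acquired, so
 * the caller must NOT call up_read().  All names are illustrative
 * stand-ins for the real kernel primitives. */
struct rwsem_model {
	int readers;			/* active reader count */
	bool fatal_signal_pending;	/* stand-in for fatal_signal_pending() */
};

static int down_read_killable_model(struct rwsem_model *sem)
{
	if (sem->fatal_signal_pending)
		return -EINTR;	/* aborted: reader count unchanged */
	sem->readers++;		/* fast path: reader acquired */
	return 0;
}

static void up_read_model(struct rwsem_model *sem)
{
	sem->readers--;
}
```

This mirrors how the real primitive is meant to be consumed, e.g. the error path in cleanup code simply bails out without unlocking.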

---

Kirill Tkhai (7):
      rwsem-spinlock: Add killable versions of __down_read()
      rwsem-spinlock: Add killable versions of rwsem_down_read_failed()
      alpha: Add __down_read_killable()
      ia64: Add __down_read_killable()
      s390: Add __down_read_killable()
      x86: Add __down_read_killable()
      rwsem: Add down_read_killable()


 arch/alpha/include/asm/rwsem.h  |   18 ++++++++++++++++--
 arch/ia64/include/asm/rwsem.h   |   22 +++++++++++++++++++---
 arch/s390/include/asm/rwsem.h   |   18 ++++++++++++++++--
 arch/x86/include/asm/rwsem.h    |   37 +++++++++++++++++++++++++++----------
 arch/x86/lib/rwsem.S            |   12 ++++++++++++
 include/asm-generic/rwsem.h     |    8 ++++++++
 include/linux/rwsem-spinlock.h  |    1 +
 include/linux/rwsem.h           |    2 ++
 kernel/locking/rwsem-spinlock.c |   37 ++++++++++++++++++++++++++++---------
 kernel/locking/rwsem-xadd.c     |   33 ++++++++++++++++++++++++++++++---
 kernel/locking/rwsem.c          |   16 ++++++++++++++++
 11 files changed, 175 insertions(+), 29 deletions(-)

--
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/7] rwsem-spinlock: Add killable versions of __down_read()
  2017-06-19 18:02 [PATCH 0/7] rwsem: Implement down_read_killable() Kirill Tkhai
@ 2017-06-19 18:02 ` Kirill Tkhai
  2017-08-10 12:11   ` [tip:locking/core] locking/rwsem-spinlock: " tip-bot for Kirill Tkhai
  2017-06-19 18:02 ` [PATCH 2/7] rwsem-spinlock: Add killable versions of rwsem_down_read_failed() Kirill Tkhai
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Kirill Tkhai @ 2017-06-19 18:02 UTC (permalink / raw)
  To: linux-ia64, avagin, peterz, heiko.carstens, hpa, gorcunov,
	linux-arch, linux-s390, x86, mingo, mattst88, fenghua.yu, arnd,
	ktkhai, ink, tglx, rth, tony.luck, linux-kernel, linux-alpha,
	schwidefsky, davem

Rename __down_read() to __down_read_common() and teach it
to abort waiting when a signal is pending and a killable
state argument was passed.

Note that we shouldn't wake anybody up in the EINTR path:

We check signal_pending_state() after the (!waiter.task)
test and under the spinlock, so the current task cannot
have been woken up yet. There are two possible cases: a
writer owns the sem, or a writer is the first waiter on
the sem.

If a writer owns the sem, nobody else may work with it in
parallel. It will wake somebody up when it calls up_write()
or downgrade_write().

If a writer is the first waiter, it will be woken up when
the last active reader releases the sem and sem->count
becomes 0.

Also note that set_current_state() may be moved down next
to schedule() (after the !waiter.task check), as all
assignments in this type of semaphore (including wake_up)
occur under the spinlock, so we can't miss anything.
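The TASK_KILLABLE vs TASK_UNINTERRUPTIBLE distinction the state argument introduces can be sketched as a userspace model of signal_pending_state() (an illustrative re-implementation with the kernel's flag values, not the kernel's own code):

```c
#include <assert.h>

/* Sketch of the signal_pending_state() semantics this patch relies
 * on: TASK_UNINTERRUPTIBLE sleeps ignore all signals, while
 * TASK_KILLABLE sleeps abort only for fatal (killing) signals.
 * This is a model for illustration, not kernel code. */
#define TASK_INTERRUPTIBLE	0x0001
#define TASK_UNINTERRUPTIBLE	0x0002
#define TASK_WAKEKILL		0x0100
#define TASK_KILLABLE		(TASK_WAKEKILL | TASK_UNINTERRUPTIBLE)

static int signal_pending_state_model(int state, int pending, int fatal)
{
	if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
		return 0;	/* plain uninterruptible sleep: never aborts */
	if (!pending)
		return 0;	/* no signal queued at all */
	/* interruptible aborts for any signal, killable only for fatal */
	return (state & TASK_INTERRUPTIBLE) || fatal;
}
```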

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 include/linux/rwsem-spinlock.h  |    1 +
 kernel/locking/rwsem-spinlock.c |   37 ++++++++++++++++++++++++++++---------
 2 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/include/linux/rwsem-spinlock.h b/include/linux/rwsem-spinlock.h
index ae0528b834cd..e784761a4443 100644
--- a/include/linux/rwsem-spinlock.h
+++ b/include/linux/rwsem-spinlock.h
@@ -32,6 +32,7 @@ struct rw_semaphore {
 #define RWSEM_UNLOCKED_VALUE		0x00000000
 
 extern void __down_read(struct rw_semaphore *sem);
+extern int __must_check __down_read_killable(struct rw_semaphore *sem);
 extern int __down_read_trylock(struct rw_semaphore *sem);
 extern void __down_write(struct rw_semaphore *sem);
 extern int __must_check __down_write_killable(struct rw_semaphore *sem);
diff --git a/kernel/locking/rwsem-spinlock.c b/kernel/locking/rwsem-spinlock.c
index 20819df98125..0848634c5512 100644
--- a/kernel/locking/rwsem-spinlock.c
+++ b/kernel/locking/rwsem-spinlock.c
@@ -126,7 +126,7 @@ __rwsem_wake_one_writer(struct rw_semaphore *sem)
 /*
  * get a read lock on the semaphore
  */
-void __sched __down_read(struct rw_semaphore *sem)
+int __sched __down_read_common(struct rw_semaphore *sem, int state)
 {
 	struct rwsem_waiter waiter;
 	unsigned long flags;
@@ -140,8 +140,6 @@ void __sched __down_read(struct rw_semaphore *sem)
 		goto out;
 	}
 
-	set_current_state(TASK_UNINTERRUPTIBLE);
-
 	/* set up my own style of waitqueue */
 	waiter.task = current;
 	waiter.type = RWSEM_WAITING_FOR_READ;
@@ -149,20 +147,41 @@ void __sched __down_read(struct rw_semaphore *sem)
 
 	list_add_tail(&waiter.list, &sem->wait_list);
 
-	/* we don't need to touch the semaphore struct anymore */
-	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
-
 	/* wait to be given the lock */
 	for (;;) {
 		if (!waiter.task)
 			break;
+		if (signal_pending_state(state, current))
+			goto out_nolock;
+		set_current_state(state);
+		raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
 		schedule();
-		set_current_state(TASK_UNINTERRUPTIBLE);
+		raw_spin_lock_irqsave(&sem->wait_lock, flags);
 	}
 
-	__set_current_state(TASK_RUNNING);
+	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
  out:
-	;
+	return 0;
+
+out_nolock:
+	/*
+	 * We didn't take the lock, so there is a writer, which is either
+	 * the owner or the first waiter of the sem. If it's a waiter,
+	 * it will be woken by the current owner. No need to wake anybody.
+	 */
+	list_del(&waiter.list);
+	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
+	return -EINTR;
+}
+
+void __sched __down_read(struct rw_semaphore *sem)
+{
+	__down_read_common(sem, TASK_UNINTERRUPTIBLE);
+}
+
+int __sched __down_read_killable(struct rw_semaphore *sem)
+{
+	return __down_read_common(sem, TASK_KILLABLE);
 }
 
 /*

* [PATCH 2/7] rwsem-spinlock: Add killable versions of rwsem_down_read_failed()
  2017-06-19 18:02 [PATCH 0/7] rwsem: Implement down_read_killable() Kirill Tkhai
  2017-06-19 18:02 ` [PATCH 1/7] rwsem-spinlock: Add killable versions of __down_read() Kirill Tkhai
@ 2017-06-19 18:02 ` Kirill Tkhai
  2017-07-06  8:04   ` Peter Zijlstra
  2017-08-10 12:12   ` [tip:locking/core] locking/rwsem-xadd: " tip-bot for Kirill Tkhai
  2017-06-19 18:02 ` [PATCH 3/7] alpha: Add __down_read_killable() Kirill Tkhai
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 15+ messages in thread
From: Kirill Tkhai @ 2017-06-19 18:02 UTC (permalink / raw)
  To: linux-ia64, avagin, peterz, heiko.carstens, hpa, gorcunov,
	linux-arch, linux-s390, x86, mingo, mattst88, fenghua.yu, arnd,
	ktkhai, ink, tglx, rth, tony.luck, linux-kernel, linux-alpha,
	schwidefsky, davem

Rename rwsem_down_read_failed() to __rwsem_down_read_failed_common()
and teach it to abort waiting when a signal is pending and a killable
state argument was passed.

Note that we shouldn't wake anybody up in the EINTR path:

We check (waiter.task) under the spinlock before going to the
out_nolock path. The current task cannot have been woken up yet,
so there is either a writer owning the sem, or a writer that is
the first waiter. In both cases we shouldn't wake anybody. If a
writer owns the sem and we were the only waiter, remove
RWSEM_WAITING_BIAS, as there are no waiters anymore.
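The count accounting in that out_nolock path can be sketched as a toy userspace model (simplified constant, not the real per-arch values): when the aborting reader was the last waiter, RWSEM_WAITING_BIAS has to come back out of sem->count so later releasers don't see waiters on an empty list:

```c
#include <assert.h>

/* Toy model of the out_nolock accounting in
 * __rwsem_down_read_failed_common().  WAITING_BIAS is a simplified
 * stand-in for RWSEM_WAITING_BIAS; this is illustration, not the
 * kernel's atomic code. */
#define WAITING_BIAS	(-0x100000000LL)

static long long abort_read_wait(long long count, int waiters_left)
{
	/* atomic_long_add(-RWSEM_WAITING_BIAS, &sem->count) when the
	 * wait list became empty after list_del(&waiter.list) */
	if (waiters_left == 0)
		count += -WAITING_BIAS;
	return count;
}
```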

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 include/linux/rwsem.h       |    1 +
 kernel/locking/rwsem-xadd.c |   33 ++++++++++++++++++++++++++++++---
 2 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index dd1d14250340..0ad7318ff299 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -44,6 +44,7 @@ struct rw_semaphore {
 };
 
 extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);
+extern struct rw_semaphore *rwsem_down_read_failed_killable(struct rw_semaphore *sem);
 extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem);
 extern struct rw_semaphore *rwsem_down_write_failed_killable(struct rw_semaphore *sem);
 extern struct rw_semaphore *rwsem_wake(struct rw_semaphore *);
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 34e727f18e49..02f660666ab8 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -221,8 +221,8 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 /*
  * Wait for the read lock to be granted
  */
-__visible
-struct rw_semaphore __sched *rwsem_down_read_failed(struct rw_semaphore *sem)
+static inline struct rw_semaphore __sched *
+__rwsem_down_read_failed_common(struct rw_semaphore *sem, int state)
 {
 	long count, adjustment = -RWSEM_ACTIVE_READ_BIAS;
 	struct rwsem_waiter waiter;
@@ -255,17 +255,44 @@ struct rw_semaphore __sched *rwsem_down_read_failed(struct rw_semaphore *sem)
 
 	/* wait to be given the lock */
 	while (true) {
-		set_current_state(TASK_UNINTERRUPTIBLE);
+		set_current_state(state);
 		if (!waiter.task)
 			break;
+		if (signal_pending_state(state, current)) {
+			raw_spin_lock_irq(&sem->wait_lock);
+			if (waiter.task)
+				goto out_nolock;
+			raw_spin_unlock_irq(&sem->wait_lock);
+			break;
+		}
 		schedule();
 	}
 
 	__set_current_state(TASK_RUNNING);
 	return sem;
+out_nolock:
+	list_del(&waiter.list);
+	if (list_empty(&sem->wait_list))
+		atomic_long_add(-RWSEM_WAITING_BIAS, &sem->count);
+	raw_spin_unlock_irq(&sem->wait_lock);
+	__set_current_state(TASK_RUNNING);
+	return ERR_PTR(-EINTR);
+}
+
+__visible struct rw_semaphore * __sched
+rwsem_down_read_failed(struct rw_semaphore *sem)
+{
+	return __rwsem_down_read_failed_common(sem, TASK_UNINTERRUPTIBLE);
 }
 EXPORT_SYMBOL(rwsem_down_read_failed);
 
+__visible struct rw_semaphore * __sched
+rwsem_down_read_failed_killable(struct rw_semaphore *sem)
+{
+	return __rwsem_down_read_failed_common(sem, TASK_KILLABLE);
+}
+EXPORT_SYMBOL(rwsem_down_read_failed_killable);
+
 /*
  * This function must be called with the sem->wait_lock held to prevent
  * race conditions between checking the rwsem wait list and setting the

* [PATCH 3/7] alpha: Add __down_read_killable()
  2017-06-19 18:02 [PATCH 0/7] rwsem: Implement down_read_killable() Kirill Tkhai
  2017-06-19 18:02 ` [PATCH 1/7] rwsem-spinlock: Add killable versions of __down_read() Kirill Tkhai
  2017-06-19 18:02 ` [PATCH 2/7] rwsem-spinlock: Add killable versions of rwsem_down_read_failed() Kirill Tkhai
@ 2017-06-19 18:02 ` Kirill Tkhai
  2017-06-19 18:02 ` [PATCH 4/7] ia64: " Kirill Tkhai
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Kirill Tkhai @ 2017-06-19 18:02 UTC (permalink / raw)
  To: linux-ia64, avagin, peterz, heiko.carstens, hpa, gorcunov,
	linux-arch, linux-s390, x86, mingo, mattst88, fenghua.yu, arnd,
	ktkhai, ink, tglx, rth, tony.luck, linux-kernel, linux-alpha,
	schwidefsky, davem

Similar to __down_write_killable(), add a killable read primitive.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 arch/alpha/include/asm/rwsem.h |   18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/alpha/include/asm/rwsem.h b/arch/alpha/include/asm/rwsem.h
index 77873d0ad293..7118bd090085 100644
--- a/arch/alpha/include/asm/rwsem.h
+++ b/arch/alpha/include/asm/rwsem.h
@@ -21,7 +21,7 @@
 #define RWSEM_ACTIVE_READ_BIAS		RWSEM_ACTIVE_BIAS
 #define RWSEM_ACTIVE_WRITE_BIAS		(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
 
-static inline void __down_read(struct rw_semaphore *sem)
+static inline int ___down_read(struct rw_semaphore *sem)
 {
 	long oldcount;
 #ifndef	CONFIG_SMP
@@ -41,10 +41,24 @@ static inline void __down_read(struct rw_semaphore *sem)
 	:"=&r" (oldcount), "=m" (sem->count), "=&r" (temp)
 	:"Ir" (RWSEM_ACTIVE_READ_BIAS), "m" (sem->count) : "memory");
 #endif
-	if (unlikely(oldcount < 0))
+	return (oldcount < 0);
+}
+
+static inline void __down_read(struct rw_semaphore *sem)
+{
+	if (unlikely(___down_read(sem)))
 		rwsem_down_read_failed(sem);
 }
 
+static inline int __down_read_killable(struct rw_semaphore *sem)
+{
+	if (unlikely(___down_read(sem)))
+		if (IS_ERR(rwsem_down_read_failed_killable(sem)))
+			return -EINTR;
+
+	return 0;
+}
+
 /*
  * trylock for reading -- returns 1 if successful, 0 if contention
  */

* [PATCH 4/7] ia64: Add __down_read_killable()
  2017-06-19 18:02 [PATCH 0/7] rwsem: Implement down_read_killable() Kirill Tkhai
                   ` (2 preceding siblings ...)
  2017-06-19 18:02 ` [PATCH 3/7] alpha: Add __down_read_killable() Kirill Tkhai
@ 2017-06-19 18:02 ` Kirill Tkhai
  2017-06-19 18:03 ` [PATCH 5/7] s390: " Kirill Tkhai
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Kirill Tkhai @ 2017-06-19 18:02 UTC (permalink / raw)
  To: linux-ia64, avagin, peterz, heiko.carstens, hpa, gorcunov,
	linux-arch, linux-s390, x86, mingo, mattst88, fenghua.yu, arnd,
	ktkhai, ink, tglx, rth, tony.luck, linux-kernel, linux-alpha,
	schwidefsky, davem

Similar to __down_write_killable(), add a killable read primitive.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 arch/ia64/include/asm/rwsem.h |   22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/include/asm/rwsem.h b/arch/ia64/include/asm/rwsem.h
index 8fa98dd303b4..1fb8b7cb1c98 100644
--- a/arch/ia64/include/asm/rwsem.h
+++ b/arch/ia64/include/asm/rwsem.h
@@ -37,15 +37,31 @@
 /*
  * lock for reading
  */
-static inline void
-__down_read (struct rw_semaphore *sem)
+static inline int
+___down_read (struct rw_semaphore *sem)
 {
 	long result = ia64_fetchadd8_acq((unsigned long *)&sem->count.counter, 1);
 
-	if (result < 0)
+	return (result < 0);
+}
+
+static inline void
+__down_read (struct rw_semaphore *sem)
+{
+	if (___down_read(sem))
 		rwsem_down_read_failed(sem);
 }
 
+static inline int
+__down_read_killable (struct rw_semaphore *sem)
+{
+	if (___down_read(sem))
+		if (IS_ERR(rwsem_down_read_failed_killable(sem)))
+			return -EINTR;
+
+	return 0;
+}
+
 /*
  * lock for writing
  */

* [PATCH 5/7] s390: Add __down_read_killable()
  2017-06-19 18:02 [PATCH 0/7] rwsem: Implement down_read_killable() Kirill Tkhai
                   ` (3 preceding siblings ...)
  2017-06-19 18:02 ` [PATCH 4/7] ia64: " Kirill Tkhai
@ 2017-06-19 18:03 ` Kirill Tkhai
  2017-06-19 18:03 ` [PATCH 6/7] x86: " Kirill Tkhai
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Kirill Tkhai @ 2017-06-19 18:03 UTC (permalink / raw)
  To: linux-ia64, avagin, peterz, heiko.carstens, hpa, gorcunov,
	linux-arch, linux-s390, x86, mingo, mattst88, fenghua.yu, arnd,
	ktkhai, ink, tglx, rth, tony.luck, linux-kernel, linux-alpha,
	schwidefsky, davem

Similar to __down_write_killable(), add a killable read primitive.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 arch/s390/include/asm/rwsem.h |   18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/s390/include/asm/rwsem.h b/arch/s390/include/asm/rwsem.h
index 597e7e96b59e..6bef7d5fbf1c 100644
--- a/arch/s390/include/asm/rwsem.h
+++ b/arch/s390/include/asm/rwsem.h
@@ -49,7 +49,7 @@
 /*
  * lock for reading
  */
-static inline void __down_read(struct rw_semaphore *sem)
+static inline int ___down_read(struct rw_semaphore *sem)
 {
 	signed long old, new;
 
@@ -62,10 +62,24 @@ static inline void __down_read(struct rw_semaphore *sem)
 		: "=&d" (old), "=&d" (new), "=Q" (sem->count)
 		: "Q" (sem->count), "i" (RWSEM_ACTIVE_READ_BIAS)
 		: "cc", "memory");
-	if (old < 0)
+	return (old < 0);
+}
+
+static inline void __down_read(struct rw_semaphore *sem)
+{
+	if (___down_read(sem))
 		rwsem_down_read_failed(sem);
 }
 
+static inline int __down_read_killable(struct rw_semaphore *sem)
+{
+	if (___down_read(sem))
+		if (IS_ERR(rwsem_down_read_failed_killable(sem)))
+			return -EINTR;
+
+	return 0;
+}
+
 /*
  * trylock for reading -- returns 1 if successful, 0 if contention
  */

* [PATCH 6/7] x86: Add __down_read_killable()
  2017-06-19 18:02 [PATCH 0/7] rwsem: Implement down_read_killable() Kirill Tkhai
                   ` (4 preceding siblings ...)
  2017-06-19 18:03 ` [PATCH 5/7] s390: " Kirill Tkhai
@ 2017-06-19 18:03 ` Kirill Tkhai
  2017-06-19 18:03 ` [PATCH 7/7] rwsem: Add down_read_killable() Kirill Tkhai
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Kirill Tkhai @ 2017-06-19 18:03 UTC (permalink / raw)
  To: linux-ia64, avagin, peterz, heiko.carstens, hpa, gorcunov,
	linux-arch, linux-s390, x86, mingo, mattst88, fenghua.yu, arnd,
	ktkhai, ink, tglx, rth, tony.luck, linux-kernel, linux-alpha,
	schwidefsky, davem

Similar to __down_write_killable(), add a killable read primitive:
extract the current __down_read() code into a macro and teach it
to take different functions as its slow_path argument:
store the ax register to ret, and add the sp register and
preserve its value.

Add a call_rwsem_down_read_failed_killable() assembly entry,
similar to call_rwsem_down_read_failed():
push the dx register to the stack in addition to the common
registers, as it's not declared as modifiable in ____down_read().
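The macro's return-value handling leans on the kernel's ERR_PTR convention: the slow path hands back the semaphore pointer on success or ERR_PTR(-EINTR) in %ax on a fatal signal, and __down_read_killable() folds that into 0 / -EINTR. A userspace sketch (simplified `*_model` re-definitions for illustration, not the kernel headers):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Simplified model of the kernel's ERR_PTR/IS_ERR convention:
 * the top MAX_ERRNO values of the address space encode negative
 * errno values as pointers. */
#define MAX_ERRNO	4095

static void *ERR_PTR_model(long error)
{
	return (void *)error;
}

static int IS_ERR_model(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* how __down_read_killable() maps the slow path's %ax result */
static int down_read_killable_ret(void *slow_path_ret)
{
	return IS_ERR_model(slow_path_ret) ? -EINTR : 0;
}
```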

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 arch/x86/include/asm/rwsem.h |   37 +++++++++++++++++++++++++++----------
 arch/x86/lib/rwsem.S         |   12 ++++++++++++
 2 files changed, 39 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/rwsem.h b/arch/x86/include/asm/rwsem.h
index a34e0d4b957d..0fe7b0aee266 100644
--- a/arch/x86/include/asm/rwsem.h
+++ b/arch/x86/include/asm/rwsem.h
@@ -60,18 +60,35 @@
 /*
  * lock for reading
  */
+#define ____down_read(sem, slow_path)					\
+({									\
+	struct rw_semaphore* ret;					\
+	register void *__sp asm(_ASM_SP);				\
+	asm volatile("# beginning down_read\n\t"			\
+		     LOCK_PREFIX _ASM_INC "(%3)\n\t"			\
+		     /* adds 0x00000001 */				\
+		     "  jns        1f\n"				\
+		     "  call " slow_path "\n"				\
+		     "1:\n\t"						\
+		     "# ending down_read\n\t"				\
+		     : "+m" (sem->count), "=a" (ret), "+r" (__sp)	\
+		     : "a" (sem)					\
+		     : "memory", "cc");					\
+									\
+	ret;								\
+})
+
 static inline void __down_read(struct rw_semaphore *sem)
 {
-	asm volatile("# beginning down_read\n\t"
-		     LOCK_PREFIX _ASM_INC "(%1)\n\t"
-		     /* adds 0x00000001 */
-		     "  jns        1f\n"
-		     "  call call_rwsem_down_read_failed\n"
-		     "1:\n\t"
-		     "# ending down_read\n\t"
-		     : "+m" (sem->count)
-		     : "a" (sem)
-		     : "memory", "cc");
+	____down_read(sem, "call_rwsem_down_read_failed");
+}
+
+static inline int __down_read_killable(struct rw_semaphore *sem)
+{
+	if (IS_ERR(____down_read(sem, "call_rwsem_down_read_failed_killable")))
+		return -EINTR;
+
+	return 0;
 }
 
 /*
diff --git a/arch/x86/lib/rwsem.S b/arch/x86/lib/rwsem.S
index bf2c6074efd2..dc2ab6ea6768 100644
--- a/arch/x86/lib/rwsem.S
+++ b/arch/x86/lib/rwsem.S
@@ -98,6 +98,18 @@ ENTRY(call_rwsem_down_read_failed)
 	ret
 ENDPROC(call_rwsem_down_read_failed)
 
+ENTRY(call_rwsem_down_read_failed_killable)
+	FRAME_BEGIN
+	save_common_regs
+	__ASM_SIZE(push,) %__ASM_REG(dx)
+	movq %rax,%rdi
+	call rwsem_down_read_failed_killable
+	__ASM_SIZE(pop,) %__ASM_REG(dx)
+	restore_common_regs
+	FRAME_END
+	ret
+ENDPROC(call_rwsem_down_read_failed_killable)
+
 ENTRY(call_rwsem_down_write_failed)
 	FRAME_BEGIN
 	save_common_regs

* [PATCH 7/7] rwsem: Add down_read_killable()
  2017-06-19 18:02 [PATCH 0/7] rwsem: Implement down_read_killable() Kirill Tkhai
                   ` (5 preceding siblings ...)
  2017-06-19 18:03 ` [PATCH 6/7] x86: " Kirill Tkhai
@ 2017-06-19 18:03 ` Kirill Tkhai
  2017-06-19 20:27 ` [PATCH 0/7] rwsem: Implement down_read_killable() David Rientjes
  2017-06-20  8:30 ` David Howells
  8 siblings, 0 replies; 15+ messages in thread
From: Kirill Tkhai @ 2017-06-19 18:03 UTC (permalink / raw)
  To: linux-ia64, avagin, peterz, heiko.carstens, hpa, gorcunov,
	linux-arch, linux-s390, x86, mingo, mattst88, fenghua.yu, arnd,
	ktkhai, ink, tglx, rth, tony.luck, linux-kernel, linux-alpha,
	schwidefsky, davem

Similar to down_read() and down_write_killable(), add a
killable version of down_read(), based on the
__down_read_killable() function added in the previous
patches.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 include/asm-generic/rwsem.h |    8 ++++++++
 include/linux/rwsem.h       |    1 +
 kernel/locking/rwsem.c      |   16 ++++++++++++++++
 3 files changed, 25 insertions(+)

diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
index 6c6a2141f271..2f71b913b555 100644
--- a/include/asm-generic/rwsem.h
+++ b/include/asm-generic/rwsem.h
@@ -37,6 +37,14 @@ static inline void __down_read(struct rw_semaphore *sem)
 		rwsem_down_read_failed(sem);
 }
 
+static inline int __down_read_killable(struct rw_semaphore *sem)
+{
+	if (unlikely(atomic_long_inc_return_acquire(&sem->count) <= 0))
+		if (IS_ERR(rwsem_down_read_failed_killable(sem)))
+			return -EINTR;
+	return 0;
+}
+
 static inline int __down_read_trylock(struct rw_semaphore *sem)
 {
 	long tmp;
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index 0ad7318ff299..6ac8ee5f15dd 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -111,6 +111,7 @@ static inline int rwsem_is_contended(struct rw_semaphore *sem)
  * lock for reading
  */
 extern void down_read(struct rw_semaphore *sem);
+extern int __must_check down_read_killable(struct rw_semaphore *sem);
 
 /*
  * trylock for reading -- returns 1 if successful, 0 if contention
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 4d48b1c4870d..e53f7746d9fd 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -28,6 +28,22 @@ void __sched down_read(struct rw_semaphore *sem)
 
 EXPORT_SYMBOL(down_read);
 
+int __sched down_read_killable(struct rw_semaphore *sem)
+{
+	might_sleep();
+	rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
+
+	if (LOCK_CONTENDED_RETURN(sem, __down_read_trylock, __down_read_killable)) {
+		rwsem_release(&sem->dep_map, 1, _RET_IP_);
+		return -EINTR;
+	}
+
+	rwsem_set_reader_owned(sem);
+	return 0;
+}
+
+EXPORT_SYMBOL(down_read_killable);
+
 /*
  * trylock for reading -- returns 1 if successful, 0 if contention
  */

* Re: [PATCH 0/7] rwsem: Implement down_read_killable()
  2017-06-19 18:02 [PATCH 0/7] rwsem: Implement down_read_killable() Kirill Tkhai
                   ` (6 preceding siblings ...)
  2017-06-19 18:03 ` [PATCH 7/7] rwsem: Add down_read_killable() Kirill Tkhai
@ 2017-06-19 20:27 ` David Rientjes
  2017-06-20  8:30 ` David Howells
  8 siblings, 0 replies; 15+ messages in thread
From: David Rientjes @ 2017-06-19 20:27 UTC (permalink / raw)
  To: Kirill Tkhai
  Cc: linux-ia64, avagin, peterz, heiko.carstens, hpa, gorcunov,
	linux-arch, linux-s390, x86, mingo, mattst88, fenghua.yu, arnd,
	ink, tglx, rth, tony.luck, linux-kernel, linux-alpha,
	schwidefsky, davem

On Mon, 19 Jun 2017, Kirill Tkhai wrote:

> This series implements a killable version of down_read(),
> similar to the already existing down_write_killable().
> Patches [1-2/7] add arch-independent low-level primitives
> for both rwsem types.
> 
> Patches [3-6/7] add arch-dependent primitives for
> the architectures that use the rwsem-xadd implementation.
> Only x86 required assembly changes; the rest of the
> architectures do not need them.
> 
> I tested the series on x86 (which uses the RWSEM_XCHGADD_ALGORITHM
> config option), and also the RWSEM_GENERIC_SPINLOCK case,
> which I selected manually in Kconfig. alpha, ia64 and s390
> are compile-tested only, but I believe their changes are
> straightforward. People who work with these architectures,
> please take a look at the corresponding patches.
> 

I would have expected to see down_read_killable() actually used somewhere 
after its implementation as part of this patchset.

* Re: [PATCH 0/7] rwsem: Implement down_read_killable()
  2017-06-19 18:02 [PATCH 0/7] rwsem: Implement down_read_killable() Kirill Tkhai
                   ` (7 preceding siblings ...)
  2017-06-19 20:27 ` [PATCH 0/7] rwsem: Implement down_read_killable() David Rientjes
@ 2017-06-20  8:30 ` David Howells
  2017-06-20 10:36   ` Kirill Tkhai
  8 siblings, 1 reply; 15+ messages in thread
From: David Howells @ 2017-06-20  8:30 UTC (permalink / raw)
  To: David Rientjes
  Cc: dhowells, Kirill Tkhai, linux-ia64, avagin, peterz,
	heiko.carstens, hpa, gorcunov, linux-arch, linux-s390, x86,
	mingo, mattst88, fenghua.yu, arnd, ink, tglx, rth, tony.luck,
	linux-kernel, linux-alpha, schwidefsky, davem

David Rientjes <rientjes@google.com> wrote:

> I would have expected to see down_read_killable() actually used somewhere 
> after its implementation as part of this patchset.

There are some places where we should be using down_{read|write}_interruptible(),
if they existed, dressed as inode_lock{,_shared}_interruptible().

David

* Re: [PATCH 0/7] rwsem: Implement down_read_killable()
  2017-06-20  8:30 ` David Howells
@ 2017-06-20 10:36   ` Kirill Tkhai
  0 siblings, 0 replies; 15+ messages in thread
From: Kirill Tkhai @ 2017-06-20 10:36 UTC (permalink / raw)
  To: David Howells, Al Viro
  Cc: David Rientjes, linux-ia64, avagin, peterz, heiko.carstens, hpa,
	gorcunov, linux-arch, linux-s390, x86, mingo, mattst88,
	fenghua.yu, arnd, ink, tglx, rth, tony.luck, linux-kernel,
	linux-alpha, schwidefsky, davem

On Tue, Jun 20, 2017 at 09:30, David Howells wrote:
> David Rientjes <rientjes@google.com> wrote:
> 
> > I would have expected to see down_read_killable() actually used somewhere 
> > after its implementation as part of this patchset.
> 
> There are some places where we should be using down_{read|write}_interruptible(),
> if they existed, dressed as inode_lock{,_shared}_interruptible().

Then let's use it in iterate_dir():

[PATCH] fs: Use killable down_read() in iterate_dir()

There was mutex_lock_interruptible() initially, and it was changed
to an rwsem, but there were no killable rwsem primitives at that
time. From commit 9902af79c01a:
    
    "The main issue is the lack of down_write_killable(), so the places
     like readdir.c switched to plain inode_lock(); once killable
     variants of rwsem primitives appear, that'll be dealt with"

Use down_read_killable() in the shared case, the same way
down_write_killable() is used in the !shared case, as waiting on a
concurrent inode_lock() may take a long time and the user may want
to interrupt it.
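The control-flow point of the diff below is that both branches now assign res and share one error check, and on -EINTR the function bails out with the lock never taken, so no up_read()/up_write() runs. A hedged userspace sketch of that shape (illustrative model, not fs code):

```c
#include <assert.h>
#include <errno.h>

/* Model of the unified error path proposed for iterate_dir():
 * shared readers take the killable read lock, exclusive callers the
 * killable write lock, and a single `if (res)` handles both.
 * fatal_signal simulates a pending fatal signal during the wait. */
static int iterate_dir_lock_model(int shared, int fatal_signal)
{
	int res;

	if (shared)
		res = fatal_signal ? -EINTR : 0; /* down_read_killable(&inode->i_rwsem) */
	else
		res = fatal_signal ? -EINTR : 0; /* down_write_killable(&inode->i_rwsem) */
	if (res)
		return res;	/* goto out: lock not held, nothing to release */

	/* ... the dir_emit() loop would run here under the lock ... */
	return 0;
}
```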

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 fs/readdir.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/fs/readdir.c b/fs/readdir.c
index 89659549c09d..7c584bbb4ce3 100644
--- a/fs/readdir.c
+++ b/fs/readdir.c
@@ -36,13 +36,12 @@ int iterate_dir(struct file *file, struct dir_context *ctx)
 	if (res)
 		goto out;
 
-	if (shared) {
-		inode_lock_shared(inode);
-	} else {
+	if (shared)
+		res = down_read_killable(&inode->i_rwsem);
+	else
 		res = down_write_killable(&inode->i_rwsem);
-		if (res)
-			goto out;
-	}
+	if (res)
+		goto out;
 
 	res = -ENOENT;
 	if (!IS_DEADDIR(inode)) {

* Re: [PATCH 2/7] rwsem-spinlock: Add killable versions of rwsem_down_read_failed()
  2017-06-19 18:02 ` [PATCH 2/7] rwsem-spinlock: Add killable versions of rwsem_down_read_failed() Kirill Tkhai
@ 2017-07-06  8:04   ` Peter Zijlstra
  2017-07-06  9:45     ` Kirill Tkhai
  2017-08-10 12:12   ` [tip:locking/core] locking/rwsem-xadd: " tip-bot for Kirill Tkhai
  1 sibling, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2017-07-06  8:04 UTC (permalink / raw)
  To: Kirill Tkhai
  Cc: linux-ia64, avagin, heiko.carstens, hpa, gorcunov, linux-arch,
	linux-s390, x86, mingo, mattst88, fenghua.yu, arnd, ink, tglx,
	rth, tony.luck, linux-kernel, linux-alpha, schwidefsky, davem

On Mon, Jun 19, 2017 at 09:02:26PM +0300, Kirill Tkhai wrote:

> Subject: [PATCH 2/7] rwsem-spinlock: Add killable versions of rwsem_down_read_failed()

>  kernel/locking/rwsem-xadd.c |   33 ++++++++++++++++++++++++++++++---

Fixed that subject for you ;-)

* Re: [PATCH 2/7] rwsem-spinlock: Add killable versions of rwsem_down_read_failed()
  2017-07-06  8:04   ` Peter Zijlstra
@ 2017-07-06  9:45     ` Kirill Tkhai
  0 siblings, 0 replies; 15+ messages in thread
From: Kirill Tkhai @ 2017-07-06  9:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-ia64, avagin, heiko.carstens, hpa, gorcunov, linux-arch,
	linux-s390, x86, mingo, mattst88, fenghua.yu, arnd, ink, tglx,
	rth, tony.luck, linux-kernel, linux-alpha, schwidefsky, davem

On 06.07.2017 11:04, Peter Zijlstra wrote:
> On Mon, Jun 19, 2017 at 09:02:26PM +0300, Kirill Tkhai wrote:
> 
>> Subject: [PATCH 2/7] rwsem-spinlock: Add killable versions of rwsem_down_read_failed()
> 
>>  kernel/locking/rwsem-xadd.c |   33 ++++++++++++++++++++++++++++++---
> 
> Fixed that subject for your ;-)

Thanks :)

* [tip:locking/core] locking/rwsem-spinlock: Add killable versions of __down_read()
  2017-06-19 18:02 ` [PATCH 1/7] rwsem-spinlock: Add killable versions of __down_read() Kirill Tkhai
@ 2017-08-10 12:11   ` tip-bot for Kirill Tkhai
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot for Kirill Tkhai @ 2017-08-10 12:11 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, tglx, linux-kernel, ktkhai, peterz, torvalds, mingo

Commit-ID:  0aa1125fa8bc5e5f98317156728fa4d0293561a5
Gitweb:     http://git.kernel.org/tip/0aa1125fa8bc5e5f98317156728fa4d0293561a5
Author:     Kirill Tkhai <ktkhai@virtuozzo.com>
AuthorDate: Mon, 19 Jun 2017 21:02:12 +0300
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 10 Aug 2017 12:28:55 +0200

locking/rwsem-spinlock: Add killable versions of __down_read()

Rename __down_read() to __down_read_common() and teach it
to abort waiting when a signal is pending and a killable
state argument was passed.

Note that we shouldn't wake anybody up in the -EINTR path:

We check signal_pending_state() after the (!waiter.task)
test and under the spinlock, so the current task cannot
have been woken up. There are two possible cases: a writer
owns the sem, or a writer is the first waiter on the sem.

If a writer owns the sem, no one else may work with it in
parallel; it will wake somebody when it calls up_write()
or downgrade_write().

If a writer is the first waiter, it will be woken up when
the last active reader releases the sem and sem->count
becomes 0.

Also note that set_current_state() may be moved down next
to schedule() (after the !waiter.task check), since all
assignments in this type of semaphore (including wake-ups)
occur under the spinlock, so we can't miss anything.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: arnd@arndb.de
Cc: avagin@virtuozzo.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: gorcunov@virtuozzo.com
Cc: heiko.carstens@de.ibm.com
Cc: hpa@zytor.com
Cc: ink@jurassic.park.msu.ru
Cc: mattst88@gmail.com
Cc: rth@twiddle.net
Cc: schwidefsky@de.ibm.com
Cc: tony.luck@intel.com
Link: http://lkml.kernel.org/r/149789533283.9059.9829416940494747182.stgit@localhost.localdomain
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/rwsem-spinlock.h  |  1 +
 kernel/locking/rwsem-spinlock.c | 37 ++++++++++++++++++++++++++++---------
 2 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/include/linux/rwsem-spinlock.h b/include/linux/rwsem-spinlock.h
index ae0528b..e784761 100644
--- a/include/linux/rwsem-spinlock.h
+++ b/include/linux/rwsem-spinlock.h
@@ -32,6 +32,7 @@ struct rw_semaphore {
 #define RWSEM_UNLOCKED_VALUE		0x00000000
 
 extern void __down_read(struct rw_semaphore *sem);
+extern int __must_check __down_read_killable(struct rw_semaphore *sem);
 extern int __down_read_trylock(struct rw_semaphore *sem);
 extern void __down_write(struct rw_semaphore *sem);
 extern int __must_check __down_write_killable(struct rw_semaphore *sem);
diff --git a/kernel/locking/rwsem-spinlock.c b/kernel/locking/rwsem-spinlock.c
index 20819df..0848634 100644
--- a/kernel/locking/rwsem-spinlock.c
+++ b/kernel/locking/rwsem-spinlock.c
@@ -126,7 +126,7 @@ __rwsem_wake_one_writer(struct rw_semaphore *sem)
 /*
  * get a read lock on the semaphore
  */
-void __sched __down_read(struct rw_semaphore *sem)
+int __sched __down_read_common(struct rw_semaphore *sem, int state)
 {
 	struct rwsem_waiter waiter;
 	unsigned long flags;
@@ -140,8 +140,6 @@ void __sched __down_read(struct rw_semaphore *sem)
 		goto out;
 	}
 
-	set_current_state(TASK_UNINTERRUPTIBLE);
-
 	/* set up my own style of waitqueue */
 	waiter.task = current;
 	waiter.type = RWSEM_WAITING_FOR_READ;
@@ -149,20 +147,41 @@ void __sched __down_read(struct rw_semaphore *sem)
 
 	list_add_tail(&waiter.list, &sem->wait_list);
 
-	/* we don't need to touch the semaphore struct anymore */
-	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
-
 	/* wait to be given the lock */
 	for (;;) {
 		if (!waiter.task)
 			break;
+		if (signal_pending_state(state, current))
+			goto out_nolock;
+		set_current_state(state);
+		raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
 		schedule();
-		set_current_state(TASK_UNINTERRUPTIBLE);
+		raw_spin_lock_irqsave(&sem->wait_lock, flags);
 	}
 
-	__set_current_state(TASK_RUNNING);
+	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
  out:
-	;
+	return 0;
+
+out_nolock:
+	/*
+	 * We didn't take the lock, so there is a writer, which
+	 * either owns the sem or is its first waiter. If it's a
+	 * waiter, it will be woken by the current owner; no need
+	 * to wake anybody.
+	 */
+	list_del(&waiter.list);
+	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
+	return -EINTR;
+}
+
+void __sched __down_read(struct rw_semaphore *sem)
+{
+	__down_read_common(sem, TASK_UNINTERRUPTIBLE);
+}
+
+int __sched __down_read_killable(struct rw_semaphore *sem)
+{
+	return __down_read_common(sem, TASK_KILLABLE);
 }
 
 /*

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [tip:locking/core] locking/rwsem-xadd: Add killable versions of rwsem_down_read_failed()
  2017-06-19 18:02 ` [PATCH 2/7] rwsem-spinlock: Add killable versions of rwsem_down_read_failed() Kirill Tkhai
  2017-07-06  8:04   ` Peter Zijlstra
@ 2017-08-10 12:12   ` tip-bot for Kirill Tkhai
  1 sibling, 0 replies; 15+ messages in thread
From: tip-bot for Kirill Tkhai @ 2017-08-10 12:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, torvalds, mingo, linux-kernel, tglx, ktkhai, peterz

Commit-ID:  83ced169d9a01f22eb39f1fcc1f89ad9d223238f
Gitweb:     http://git.kernel.org/tip/83ced169d9a01f22eb39f1fcc1f89ad9d223238f
Author:     Kirill Tkhai <ktkhai@virtuozzo.com>
AuthorDate: Mon, 19 Jun 2017 21:02:26 +0300
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 10 Aug 2017 12:28:55 +0200

locking/rwsem-xadd: Add killable versions of rwsem_down_read_failed()

Rename rwsem_down_read_failed() to __rwsem_down_read_failed_common()
and teach it to abort waiting when a signal is pending and a killable
state argument was passed.

Note that we shouldn't wake anybody up in the -EINTR path:

We check (waiter.task) under the spinlock before we take the
out_nolock path. The current task cannot have been woken up, so
there is either a writer owning the sem or a writer that is the
first waiter. In both cases we shouldn't wake anybody. If a writer
owns the sem and we were the only waiter, also remove
RWSEM_WAITING_BIAS, as there are no waiters anymore.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: arnd@arndb.de
Cc: avagin@virtuozzo.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: gorcunov@virtuozzo.com
Cc: heiko.carstens@de.ibm.com
Cc: hpa@zytor.com
Cc: ink@jurassic.park.msu.ru
Cc: mattst88@gmail.com
Cc: rth@twiddle.net
Cc: schwidefsky@de.ibm.com
Cc: tony.luck@intel.com
Link: http://lkml.kernel.org/r/149789534632.9059.2901382369609922565.stgit@localhost.localdomain
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/rwsem.h       |  1 +
 kernel/locking/rwsem-xadd.c | 33 ++++++++++++++++++++++++++++++---
 2 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index dd1d142..0ad7318 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -44,6 +44,7 @@ struct rw_semaphore {
 };
 
 extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);
+extern struct rw_semaphore *rwsem_down_read_failed_killable(struct rw_semaphore *sem);
 extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem);
 extern struct rw_semaphore *rwsem_down_write_failed_killable(struct rw_semaphore *sem);
 extern struct rw_semaphore *rwsem_wake(struct rw_semaphore *);
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 34e727f..02f6606 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -221,8 +221,8 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 /*
  * Wait for the read lock to be granted
  */
-__visible
-struct rw_semaphore __sched *rwsem_down_read_failed(struct rw_semaphore *sem)
+static inline struct rw_semaphore __sched *
+__rwsem_down_read_failed_common(struct rw_semaphore *sem, int state)
 {
 	long count, adjustment = -RWSEM_ACTIVE_READ_BIAS;
 	struct rwsem_waiter waiter;
@@ -255,17 +255,44 @@ struct rw_semaphore __sched *rwsem_down_read_failed(struct rw_semaphore *sem)
 
 	/* wait to be given the lock */
 	while (true) {
-		set_current_state(TASK_UNINTERRUPTIBLE);
+		set_current_state(state);
 		if (!waiter.task)
 			break;
+		if (signal_pending_state(state, current)) {
+			raw_spin_lock_irq(&sem->wait_lock);
+			if (waiter.task)
+				goto out_nolock;
+			raw_spin_unlock_irq(&sem->wait_lock);
+			break;
+		}
 		schedule();
 	}
 
 	__set_current_state(TASK_RUNNING);
 	return sem;
+out_nolock:
+	list_del(&waiter.list);
+	if (list_empty(&sem->wait_list))
+		atomic_long_add(-RWSEM_WAITING_BIAS, &sem->count);
+	raw_spin_unlock_irq(&sem->wait_lock);
+	__set_current_state(TASK_RUNNING);
+	return ERR_PTR(-EINTR);
+}
+
+__visible struct rw_semaphore * __sched
+rwsem_down_read_failed(struct rw_semaphore *sem)
+{
+	return __rwsem_down_read_failed_common(sem, TASK_UNINTERRUPTIBLE);
 }
 EXPORT_SYMBOL(rwsem_down_read_failed);
 
+__visible struct rw_semaphore * __sched
+rwsem_down_read_failed_killable(struct rw_semaphore *sem)
+{
+	return __rwsem_down_read_failed_common(sem, TASK_KILLABLE);
+}
+EXPORT_SYMBOL(rwsem_down_read_failed_killable);
+
 /*
  * This function must be called with the sem->wait_lock held to prevent
  * race conditions between checking the rwsem wait list and setting the

^ permalink raw reply related	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2017-08-10 12:15 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-06-19 18:02 [PATCH 0/7] rwsem: Implement down_read_killable() Kirill Tkhai
2017-06-19 18:02 ` [PATCH 1/7] rwsem-spinlock: Add killable versions of __down_read() Kirill Tkhai
2017-08-10 12:11   ` [tip:locking/core] locking/rwsem-spinlock: " tip-bot for Kirill Tkhai
2017-06-19 18:02 ` [PATCH 2/7] rwsem-spinlock: Add killable versions of rwsem_down_read_failed() Kirill Tkhai
2017-07-06  8:04   ` Peter Zijlstra
2017-07-06  9:45     ` Kirill Tkhai
2017-08-10 12:12   ` [tip:locking/core] locking/rwsem-xadd: " tip-bot for Kirill Tkhai
2017-06-19 18:02 ` [PATCH 3/7] alpha: Add __down_read_killable() Kirill Tkhai
2017-06-19 18:02 ` [PATCH 4/7] ia64: " Kirill Tkhai
2017-06-19 18:03 ` [PATCH 5/7] s390: " Kirill Tkhai
2017-06-19 18:03 ` [PATCH 6/7] x86: " Kirill Tkhai
2017-06-19 18:03 ` [PATCH 7/7] rwsem: Add down_read_killable() Kirill Tkhai
2017-06-19 20:27 ` [PATCH 0/7] rwsem: Implement down_read_killable() David Rientjes
2017-06-20  8:30 ` David Howells
2017-06-20 10:36   ` Kirill Tkhai
