* [PATCH RFC 1/2] futex: Split key setup from key queue locking and read
@ 2019-07-30 22:06 Gabriel Krisman Bertazi
  2019-07-30 22:06 ` [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes Gabriel Krisman Bertazi
  2019-07-31 23:33 ` [PATCH RFC 1/2] futex: Split key setup from key queue locking and read Thomas Gleixner
  0 siblings, 2 replies; 16+ messages in thread
From: Gabriel Krisman Bertazi @ 2019-07-30 22:06 UTC (permalink / raw)
  To: tglx; +Cc: mingo, peterz, dvhart, linux-kernel, Gabriel Krisman Bertazi, kernel

Split the futex key setup from the queue locking and key reading.  This
is useful to support the setup of multiple keys at the same time, like
what is done in futex_requeue() and what will be done for the
FUTEX_WAIT_MULTIPLE command.

Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com>
---
 kernel/futex.c | 71 +++++++++++++++++++++++++++++---------------------
 1 file changed, 42 insertions(+), 29 deletions(-)

diff --git a/kernel/futex.c b/kernel/futex.c
index 6d50728ef2e7..91f3db335c57 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -2631,6 +2631,39 @@ static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q,
 	__set_current_state(TASK_RUNNING);
 }
 
+static int __futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
+			      struct futex_q *q, struct futex_hash_bucket **hb)
+{
+
+	u32 uval;
+	int ret;
+
+retry_private:
+	*hb = queue_lock(q);
+
+	ret = get_futex_value_locked(&uval, uaddr);
+
+	if (ret) {
+		queue_unlock(*hb);
+
+		ret = get_user(uval, uaddr);
+		if (ret)
+			return ret;
+
+		if (!(flags & FLAGS_SHARED))
+			goto retry_private;
+
+		return 1;
+	}
+
+	if (uval != val) {
+		queue_unlock(*hb);
+		ret = -EWOULDBLOCK;
+	}
+
+	return ret;
+}
+
 /**
  * futex_wait_setup() - Prepare to wait on a futex
  * @uaddr:	the futex userspace address
@@ -2651,7 +2684,6 @@ static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q,
 static int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
 			   struct futex_q *q, struct futex_hash_bucket **hb)
 {
-	u32 uval;
 	int ret;
 
 	/*
@@ -2672,38 +2704,19 @@ static int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
 	 * absorb a wakeup if *uaddr does not match the desired values
 	 * while the syscall executes.
 	 */
-retry:
-	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &q->key, FUTEX_READ);
-	if (unlikely(ret != 0))
-		return ret;
-
-retry_private:
-	*hb = queue_lock(q);
+	do {
+		ret = get_futex_key(uaddr, flags & FLAGS_SHARED,
+				    &q->key, FUTEX_READ);
+		if (unlikely(ret != 0))
+			return ret;
 
-	ret = get_futex_value_locked(&uval, uaddr);
+		ret = __futex_wait_setup(uaddr, val, flags, q, hb);
 
-	if (ret) {
-		queue_unlock(*hb);
-
-		ret = get_user(uval, uaddr);
+		/* Drop key reference if retry or error. */
 		if (ret)
-			goto out;
+			put_futex_key(&q->key);
+	} while (ret > 0);
 
-		if (!(flags & FLAGS_SHARED))
-			goto retry_private;
-
-		put_futex_key(&q->key);
-		goto retry;
-	}
-
-	if (uval != val) {
-		queue_unlock(*hb);
-		ret = -EWOULDBLOCK;
-	}
-
-out:
-	if (ret)
-		put_futex_key(&q->key);
 	return ret;
 }
 
-- 
2.20.1



* [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-07-30 22:06 [PATCH RFC 1/2] futex: Split key setup from key queue locking and read Gabriel Krisman Bertazi
@ 2019-07-30 22:06 ` Gabriel Krisman Bertazi
  2019-07-31 12:06   ` Peter Zijlstra
  2019-08-01  0:45   ` Thomas Gleixner
  2019-07-31 23:33 ` [PATCH RFC 1/2] futex: Split key setup from key queue locking and read Thomas Gleixner
  1 sibling, 2 replies; 16+ messages in thread
From: Gabriel Krisman Bertazi @ 2019-07-30 22:06 UTC (permalink / raw)
  To: tglx
  Cc: mingo, peterz, dvhart, linux-kernel, Gabriel Krisman Bertazi,
	kernel, Zebediah Figura, Steven Noonan, Pierre-Loup A . Griffais

This is a new futex operation, called FUTEX_WAIT_MULTIPLE, which allows
a thread to wait on several futexes at the same time, and be awoken by
any of them.  In a sense, it implements one of the features that was
supported by polling on the old FUTEX_FD interface.

My use case for this operation lies in Wine, where we want to implement
a similar interface available in Windows, used mainly for event
handling.  The wine folks have an implementation that uses eventfd, but
it suffers from FD exhaustion (I was told they have applications that go
to the order of multi-million FDs) and higher CPU utilization.

In time, we are also proposing modifications to glibc and libpthread to
make this feature available for Linux native multithreaded applications
using libpthread, which can benefit from the behavior of waiting on any
of a group of futexes.

In particular, using futexes in our Wine use case reduced the CPU
utilization by 4% for the game Beat Saber and by 1.5% for the game
Shadow of Tomb Raider, both running over Proton (a wine based solution
for Windows emulation), when compared to the eventfd interface. This
implementation also doesn't rely on file descriptors, so it doesn't risk
overflowing the resource.

Technically, the existing FUTEX_WAIT implementation can be easily
reworked by using do_futex_wait_multiple with a count of one, and I
have a patch showing how it works.  I'm not proposing it, since
futex is such tricky code that I'd be more comfortable having
FUTEX_WAIT_MULTIPLE running upstream for a couple of development cycles
before considering modifying FUTEX_WAIT.
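
(For reference only, and not part of what is being proposed here, that
rework could look roughly like the sketch below; timeout/restart-block
handling is omitted and the names are the ones used in this series.)

	static int futex_wait(u32 __user *uaddr, unsigned int flags, u32 val,
			      ktime_t *abs_time, u32 bitset)
	{
		struct futex_wait_block wb = {
			.uaddr	= uaddr,
			.val	= val,
			.bitset	= bitset,
		};
		int ret;

		/* With a count of one, a wakeup is reported as index 0. */
		ret = do_futex_wait_multiple(&wb, 1, flags, abs_time);
		return ret < 0 ? ret : 0;
	}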

From an implementation perspective, the futex list is passed as an array
of (pointer,value,bitset) to the kernel, which will enqueue all of them
and sleep if none was already triggered. It returns a hint of which
futex caused the wakeup event to userspace, but the hint doesn't
guarantee that it is the only futex triggered.  Before calling the syscall
again, userspace should traverse the list, trying to re-acquire any of
the other futexes, to prevent an immediate -EWOULDBLOCK return code from
the kernel.
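
As an illustration only (not part of this series), userspace could drive
the new operation roughly as below.  The opcode value and struct layout
are the ones this patch adds; the wait_any() wrapper is made up, and after
a wakeup the caller is still expected to re-check every futex as described
above.

	#include <linux/types.h>
	#include <sys/syscall.h>
	#include <time.h>
	#include <unistd.h>

	#define FUTEX_WAIT_MULTIPLE	13	/* value proposed by this patch */

	struct futex_wait_block {		/* layout proposed by this patch */
		__u32 *uaddr;
		__u32 val;
		__u32 bitset;
	};

	/*
	 * Sleep until one of the futexes in wb[] no longer holds its expected
	 * value.  Returns a hint: the index of a futex that woke us, or -1
	 * with errno set (EWOULDBLOCK if a value already mismatched, ETIMEDOUT
	 * on timeout).  The timeout is relative, as with FUTEX_WAIT in this RFC.
	 */
	static int wait_any(struct futex_wait_block *wb, unsigned int count,
			    const struct timespec *timeout)
	{
		return syscall(SYS_futex, wb, FUTEX_WAIT_MULTIPLE, count,
			       timeout, NULL, 0);
	}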

This was tested using three mechanisms:

1) By reimplementing FUTEX_WAIT in terms of FUTEX_WAIT_MULTIPLE and
running the unmodified tools/testing/selftests/futex and a full linux
distro on top of this kernel.

2) By an example code that exercises the FUTEX_WAIT_MULTIPLE path on a
multi-threaded, event-handling setup.

3) By running the Wine fsync implementation and executing multi-threaded
applications, in particular the modern games mentioned above, on top of
this implementation.

Signed-off-by: Zebediah Figura <z.figura12@gmail.com>
Signed-off-by: Steven Noonan <steven@valvesoftware.com>
Signed-off-by: Pierre-Loup A. Griffais <pgriffais@valvesoftware.com>
Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com>
---
 include/uapi/linux/futex.h |   7 ++
 kernel/futex.c             | 161 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 164 insertions(+), 4 deletions(-)

diff --git a/include/uapi/linux/futex.h b/include/uapi/linux/futex.h
index a89eb0accd5e..2401c4cf5095 100644
--- a/include/uapi/linux/futex.h
+++ b/include/uapi/linux/futex.h
@@ -21,6 +21,7 @@
 #define FUTEX_WAKE_BITSET	10
 #define FUTEX_WAIT_REQUEUE_PI	11
 #define FUTEX_CMP_REQUEUE_PI	12
+#define FUTEX_WAIT_MULTIPLE	13
 
 #define FUTEX_PRIVATE_FLAG	128
 #define FUTEX_CLOCK_REALTIME	256
@@ -150,4 +151,10 @@ struct robust_list_head {
   (((op & 0xf) << 28) | ((cmp & 0xf) << 24)		\
    | ((oparg & 0xfff) << 12) | (cmparg & 0xfff))
 
+struct futex_wait_block {
+	__u32 __user *uaddr;
+	__u32 val;
+	__u32 bitset;
+};
+
 #endif /* _UAPI_LINUX_FUTEX_H */
diff --git a/kernel/futex.c b/kernel/futex.c
index 91f3db335c57..2623e8f152cd 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -183,6 +183,7 @@ static int  __read_mostly futex_cmpxchg_enabled;
 #endif
 #define FLAGS_CLOCKRT		0x02
 #define FLAGS_HAS_TIMEOUT	0x04
+#define FLAGS_WAKE_MULTIPLE	0x08
 
 /*
  * Priority Inheritance state:
@@ -2720,6 +2721,150 @@ static int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
 	return ret;
 }
 
+static int do_futex_wait_multiple(struct futex_wait_block *wb,
+				  u32 count, unsigned int flags,
+				  ktime_t *abs_time)
+{
+
+	struct hrtimer_sleeper timeout, *to;
+	struct futex_hash_bucket *hb;
+	struct futex_q *qs = NULL;
+	int ret;
+	int i;
+
+	qs = kcalloc(count, sizeof(struct futex_q), GFP_KERNEL);
+	if (!qs)
+		return -ENOMEM;
+
+	to = futex_setup_timer(abs_time, &timeout, flags,
+			       current->timer_slack_ns);
+ retry:
+	for (i = 0; i < count; i++) {
+		qs[i].key = FUTEX_KEY_INIT;
+		qs[i].bitset = wb[i].bitset;
+
+		ret = get_futex_key(wb[i].uaddr, flags & FLAGS_SHARED,
+				    &qs[i].key, FUTEX_READ);
+		if (unlikely(ret != 0)) {
+			for (--i; i >= 0; i--)
+				put_futex_key(&qs[i].key);
+			goto out;
+		}
+	}
+
+	set_current_state(TASK_INTERRUPTIBLE);
+
+	for (i = 0; i < count; i++) {
+		ret = __futex_wait_setup(wb[i].uaddr, wb[i].val,
+					 flags, &qs[i], &hb);
+		if (ret) {
+			/* Drop the failed key directly.  keys 0..(i-1)
+			 * will be put by unqueue_me.
+			 */
+			put_futex_key(&qs[i].key);
+
+			/* Undo the partial work we did. */
+			for (--i; i >= 0; i--)
+				unqueue_me(&qs[i]);
+
+			__set_current_state(TASK_RUNNING);
+			if (ret > 0)
+				goto retry;
+			goto out;
+		}
+
+		/* We can't hold to the bucket lock when dealing with
+		 * the next futex. Queue ourselves now so we can unlock
+		 * it before moving on.
+		 */
+		queue_me(&qs[i], hb);
+	}
+
+	if (to)
+		hrtimer_start_expires(&to->timer, HRTIMER_MODE_ABS);
+
+	/* There is no easy to way to check if we are wake already on
+	 * multiple futexes without waking through each one of them.  So
+	 * just sleep and let the scheduler handle it.
+	 */
+	if (!to || to->task)
+		freezable_schedule();
+
+	__set_current_state(TASK_RUNNING);
+
+	ret = -ETIMEDOUT;
+	/* If we were woken (and unqueued), we succeeded. */
+	for (i = 0; i < count; i++)
+		if (!unqueue_me(&qs[i]))
+			ret = i;
+
+	/* Succeed wakeup */
+	if (ret >= 0)
+		goto out;
+
+	/* Woken by triggered timeout */
+	if (to && !to->task)
+		goto out;
+
+	/*
+	 * We expect signal_pending(current), but we might be the
+	 * victim of a spurious wakeup as well.
+	 */
+	if (!signal_pending(current))
+		goto retry;
+
+	ret = -ERESTARTSYS;
+	if (!abs_time)
+		goto out;
+
+	ret = -ERESTART_RESTARTBLOCK;
+ out:
+	if (to) {
+		hrtimer_cancel(&to->timer);
+		destroy_hrtimer_on_stack(&to->timer);
+	}
+
+	kfree(qs);
+	return ret;
+}
+
+static int futex_wait_multiple(u32 __user *uaddr, unsigned int flags,
+			       u32 count, ktime_t *abs_time)
+{
+	struct futex_wait_block *wb;
+	struct restart_block *restart;
+	int ret;
+
+	if (!count)
+		return -EINVAL;
+
+	wb = kcalloc(count, sizeof(struct futex_wait_block), GFP_KERNEL);
+	if (!wb)
+		return -ENOMEM;
+
+	if (copy_from_user(wb, uaddr,
+			   count * sizeof(struct futex_wait_block))) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	ret = do_futex_wait_multiple(wb, count, flags, abs_time);
+
+	if (ret == -ERESTART_RESTARTBLOCK) {
+		restart = &current->restart_block;
+		restart->fn = futex_wait_restart;
+		restart->futex.uaddr = uaddr;
+		restart->futex.val = count;
+		restart->futex.time = *abs_time;
+		restart->futex.flags = (flags | FLAGS_HAS_TIMEOUT |
+					FLAGS_WAKE_MULTIPLE);
+	}
+
+out:
+	kfree(wb);
+	return ret;
+}
+
 static int futex_wait(u32 __user *uaddr, unsigned int flags, u32 val,
 		      ktime_t *abs_time, u32 bitset)
 {
@@ -2797,6 +2942,10 @@ static long futex_wait_restart(struct restart_block *restart)
 	}
 	restart->fn = do_no_restart_syscall;
 
+	if (restart->futex.flags & FLAGS_WAKE_MULTIPLE)
+		return (long)futex_wait_multiple(uaddr, restart->futex.flags,
+						 restart->futex.val, tp);
+
 	return (long)futex_wait(uaddr, restart->futex.flags,
 				restart->futex.val, tp, restart->futex.bitset);
 }
@@ -3680,6 +3829,8 @@ long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
 					     uaddr2);
 	case FUTEX_CMP_REQUEUE_PI:
 		return futex_requeue(uaddr, flags, uaddr2, val, val2, &val3, 1);
+	case FUTEX_WAIT_MULTIPLE:
+		return futex_wait_multiple(uaddr, flags, val, timeout);
 	}
 	return -ENOSYS;
 }
@@ -3696,7 +3847,8 @@ SYSCALL_DEFINE6(futex, u32 __user *, uaddr, int, op, u32, val,
 
 	if (utime && (cmd == FUTEX_WAIT || cmd == FUTEX_LOCK_PI ||
 		      cmd == FUTEX_WAIT_BITSET ||
-		      cmd == FUTEX_WAIT_REQUEUE_PI)) {
+		      cmd == FUTEX_WAIT_REQUEUE_PI ||
+		      cmd == FUTEX_WAIT_MULTIPLE)) {
 		if (unlikely(should_fail_futex(!(op & FUTEX_PRIVATE_FLAG))))
 			return -EFAULT;
 		if (get_timespec64(&ts, utime))
@@ -3705,7 +3857,7 @@ SYSCALL_DEFINE6(futex, u32 __user *, uaddr, int, op, u32, val,
 			return -EINVAL;
 
 		t = timespec64_to_ktime(ts);
-		if (cmd == FUTEX_WAIT)
+		if (cmd == FUTEX_WAIT || cmd == FUTEX_WAIT_MULTIPLE)
 			t = ktime_add_safe(ktime_get(), t);
 		tp = &t;
 	}
@@ -3889,14 +4041,15 @@ SYSCALL_DEFINE6(futex_time32, u32 __user *, uaddr, int, op, u32, val,
 
 	if (utime && (cmd == FUTEX_WAIT || cmd == FUTEX_LOCK_PI ||
 		      cmd == FUTEX_WAIT_BITSET ||
-		      cmd == FUTEX_WAIT_REQUEUE_PI)) {
+		      cmd == FUTEX_WAIT_REQUEUE_PI ||
+		      cmd == FUTEX_WAIT_MULTIPLE)) {
 		if (get_old_timespec32(&ts, utime))
 			return -EFAULT;
 		if (!timespec64_valid(&ts))
 			return -EINVAL;
 
 		t = timespec64_to_ktime(ts);
-		if (cmd == FUTEX_WAIT)
+		if (cmd == FUTEX_WAIT || cmd == FUTEX_WAIT_MULTIPLE)
 			t = ktime_add_safe(ktime_get(), t);
 		tp = &t;
 	}
-- 
2.20.1



* Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-07-30 22:06 ` [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes Gabriel Krisman Bertazi
@ 2019-07-31 12:06   ` Peter Zijlstra
  2019-07-31 15:15     ` Zebediah Figura
  2019-08-06  6:26     ` Gabriel Krisman Bertazi
  2019-08-01  0:45   ` Thomas Gleixner
  1 sibling, 2 replies; 16+ messages in thread
From: Peter Zijlstra @ 2019-07-31 12:06 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi
  Cc: tglx, mingo, dvhart, linux-kernel, kernel, Zebediah Figura,
	Steven Noonan, Pierre-Loup A . Griffais, viro, jannh

On Tue, Jul 30, 2019 at 06:06:02PM -0400, Gabriel Krisman Bertazi wrote:
> This is a new futex operation, called FUTEX_WAIT_MULTIPLE, which allows
> a thread to wait on several futexes at the same time, and be awoken by
> any of them.  In a sense, it implements one of the features that was
> supported by polling on the old FUTEX_FD interface.
> 
> My use case for this operation lies in Wine, where we want to implement
> a similar interface available in Windows, used mainly for event
> handling.  The wine folks have an implementation that uses eventfd, but
> it suffers from FD exhaustion (I was told they have applications that go
> to the order of multi-million FDs) and higher CPU utilization.

So is multi-million the range we expect for @count ?

If so, we're having problems, see below.

> In time, we are also proposing modifications to glibc and libpthread to
> make this feature available for Linux native multithreaded applications
> using libpthread, which can benefit from the behavior of waiting on any
> of a group of futexes.
> 
> In particular, using futexes in our Wine use case reduced the CPU
> utilization by 4% for the game Beat Saber and by 1.5% for the game
> Shadow of Tomb Raider, both running over Proton (a wine based solution
> for Windows emulation), when compared to the eventfd interface. This
> implementation also doesn't rely on file descriptors, so it doesn't risk
> overflowing the resource.
> 
> Technically, the existing FUTEX_WAIT implementation can be easily
> reworked by using do_futex_wait_multiple with a count of one, and I
> have a patch showing how it works.  I'm not proposing it, since
> futex is such tricky code that I'd be more comfortable having
> FUTEX_WAIT_MULTIPLE running upstream for a couple of development cycles
> before considering modifying FUTEX_WAIT.
> 
> From an implementation perspective, the futex list is passed as an array
> of (pointer,value,bitset) to the kernel, which will enqueue all of them
> and sleep if none was already triggered. It returns a hint of which
> futex caused the wakeup event to userspace, but the hint doesn't
> guarantee that it is the only futex triggered.  Before calling the syscall
> again, userspace should traverse the list, trying to re-acquire any of
> the other futexes, to prevent an immediate -EWOULDBLOCK return code from
> the kernel.

> Signed-off-by: Zebediah Figura <z.figura12@gmail.com>
> Signed-off-by: Steven Noonan <steven@valvesoftware.com>
> Signed-off-by: Pierre-Loup A. Griffais <pgriffais@valvesoftware.com>
> Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com>

That is not a valid SoB chain.

> ---
>  include/uapi/linux/futex.h |   7 ++
>  kernel/futex.c             | 161 ++++++++++++++++++++++++++++++++++++-
>  2 files changed, 164 insertions(+), 4 deletions(-)
> 
> diff --git a/include/uapi/linux/futex.h b/include/uapi/linux/futex.h
> index a89eb0accd5e..2401c4cf5095 100644
> --- a/include/uapi/linux/futex.h
> +++ b/include/uapi/linux/futex.h

> @@ -150,4 +151,10 @@ struct robust_list_head {
>    (((op & 0xf) << 28) | ((cmp & 0xf) << 24)		\
>     | ((oparg & 0xfff) << 12) | (cmparg & 0xfff))
>  
> +struct futex_wait_block {
> +	__u32 __user *uaddr;
> +	__u32 val;
> +	__u32 bitset;
> +};

That is not compat invariant and I see a distinct lack of compat code in
this patch.

> diff --git a/kernel/futex.c b/kernel/futex.c
> index 91f3db335c57..2623e8f152cd 100644
> --- a/kernel/futex.c
> +++ b/kernel/futex.c

no function comment in sight

> +static int do_futex_wait_multiple(struct futex_wait_block *wb,
> +				  u32 count, unsigned int flags,
> +				  ktime_t *abs_time)
> +{
> +

(spurious empty line)

> +	struct hrtimer_sleeper timeout, *to;
> +	struct futex_hash_bucket *hb;
> +	struct futex_q *qs = NULL;
> +	int ret;
> +	int i;
> +
> +	qs = kcalloc(count, sizeof(struct futex_q), GFP_KERNEL);
> +	if (!qs)
> +		return -ENOMEM;

This will not work for @count ~ 1e6, or rather, MAX_ORDER is 11, so we
can, at most, allocate 4096 << 11 bytes, and since sizeof(futex_q) ==
112, that gives: ~75k objects.
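
(For reference, the arithmetic: 4096 << 11 = 8388608 bytes per allocation,
and 8388608 / 112 ~= 74898, i.e. the ~75k futex_q objects quoted above.)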

Also; this is the only actual limit placed on @count.

Jann, Al, this also allows a single task to increment i_count or
mm_count by ~75k, which might be really awesome for refcount smashing
attacks.

> +
> +	to = futex_setup_timer(abs_time, &timeout, flags,
> +			       current->timer_slack_ns);
> + retry:

(wrongly indented label)

> +	for (i = 0; i < count; i++) {
> +		qs[i].key = FUTEX_KEY_INIT;
> +		qs[i].bitset = wb[i].bitset;
> +
> +		ret = get_futex_key(wb[i].uaddr, flags & FLAGS_SHARED,
> +				    &qs[i].key, FUTEX_READ);
> +		if (unlikely(ret != 0)) {
> +			for (--i; i >= 0; i--)
> +				put_futex_key(&qs[i].key);
> +			goto out;
> +		}
> +	}
> +
> +	set_current_state(TASK_INTERRUPTIBLE);
> +
> +	for (i = 0; i < count; i++) {
> +		ret = __futex_wait_setup(wb[i].uaddr, wb[i].val,
> +					 flags, &qs[i], &hb);
> +		if (ret) {
> +			/* Drop the failed key directly.  keys 0..(i-1)
> +			 * will be put by unqueue_me.
> +			 */

(broken comment style)

> +			put_futex_key(&qs[i].key);
> +
> +			/* Undo the partial work we did. */
> +			for (--i; i >= 0; i--)
> +				unqueue_me(&qs[i]);
> +
> +			__set_current_state(TASK_RUNNING);
> +			if (ret > 0)
> +				goto retry;
> +			goto out;
> +		}
> +
> +		/* We can't hold to the bucket lock when dealing with
> +		 * the next futex. Queue ourselves now so we can unlock
> +		 * it before moving on.
> +		 */

(broken comment style)

> +		queue_me(&qs[i], hb);
> +	}
> +
> +	if (to)
> +		hrtimer_start_expires(&to->timer, HRTIMER_MODE_ABS);
> +
> +	/* There is no easy to way to check if we are wake already on
> +	 * multiple futexes without waking through each one of them.  So
> +	 * just sleep and let the scheduler handle it.
> +	 */

(broken comment style)

> +	if (!to || to->task)
> +		freezable_schedule();
> +
> +	__set_current_state(TASK_RUNNING);
> +
> +	ret = -ETIMEDOUT;
> +	/* If we were woken (and unqueued), we succeeded. */
> +	for (i = 0; i < count; i++)
> +		if (!unqueue_me(&qs[i]))
> +			ret = i;

(missing {})

> +
> +	/* Succeed wakeup */
> +	if (ret >= 0)
> +		goto out;
> +
> +	/* Woken by triggered timeout */
> +	if (to && !to->task)
> +		goto out;
> +
> +	/*
> +	 * We expect signal_pending(current), but we might be the
> +	 * victim of a spurious wakeup as well.
> +	 */

(curiously correct comment style -- which makes the patch
self-inconsistent)

> +	if (!signal_pending(current))
> +		goto retry;

I think that if you invest in a few helper functions, the above can be
reduced and written more like a normal wait loop.

> +
> +	ret = -ERESTARTSYS;
> +	if (!abs_time)
> +		goto out;
> +
> +	ret = -ERESTART_RESTARTBLOCK;
> + out:

(wrong label indent)

> +	if (to) {
> +		hrtimer_cancel(&to->timer);
> +		destroy_hrtimer_on_stack(&to->timer);
> +	}
> +
> +	kfree(qs);
> +	return ret;
> +}
> +

distinct lack of function comments

> +static int futex_wait_multiple(u32 __user *uaddr, unsigned int flags,
> +			       u32 count, ktime_t *abs_time)
> +{
> +	struct futex_wait_block *wb;
> +	struct restart_block *restart;
> +	int ret;
> +
> +	if (!count)
> +		return -EINVAL;
> +
> +	wb = kcalloc(count, sizeof(struct futex_wait_block), GFP_KERNEL);
> +	if (!wb)
> +		return -ENOMEM;
> +
> +	if (copy_from_user(wb, uaddr,
> +			   count * sizeof(struct futex_wait_block))) {
> +		ret = -EFAULT;
> +		goto out;
> +	}

I'm thinking we can do away with this giant copy and do it one at a time
from the other function, just extend the storage allocated there to
store whatever values are still required later.

Do we want to impose alignment constraints on uaddr?
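
A rough sketch of that direction (purely illustrative, not from any posted
patch; uwb and the unwind label are made-up names, and val would also need
to be stashed somewhere for the second pass):

	for (i = 0; i < count; i++) {
		struct futex_wait_block wb;

		/* One small per-element copy instead of one big array copy. */
		if (copy_from_user(&wb, &uwb[i], sizeof(wb))) {
			ret = -EFAULT;
			goto out_put_keys;
		}

		qs[i].key = FUTEX_KEY_INIT;
		qs[i].bitset = wb.bitset;

		ret = get_futex_key(wb.uaddr, flags & FLAGS_SHARED,
				    &qs[i].key, FUTEX_READ);
		if (unlikely(ret))
			goto out_put_keys;
	}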

> +	ret = do_futex_wait_multiple(wb, count, flags, abs_time);
> +
> +	if (ret == -ERESTART_RESTARTBLOCK) {
> +		restart = &current->restart_block;
> +		restart->fn = futex_wait_restart;
> +		restart->futex.uaddr = uaddr;
> +		restart->futex.val = count;
> +		restart->futex.time = *abs_time;
> +		restart->futex.flags = (flags | FLAGS_HAS_TIMEOUT |
> +					FLAGS_WAKE_MULTIPLE);
> +	}
> +
> +out:

(inconsistent correctly indented label)

> +	kfree(wb);
> +	return ret;
> +}


* Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-07-31 12:06   ` Peter Zijlstra
@ 2019-07-31 15:15     ` Zebediah Figura
  2019-07-31 22:39       ` Thomas Gleixner
  2019-08-06  6:26     ` Gabriel Krisman Bertazi
  1 sibling, 1 reply; 16+ messages in thread
From: Zebediah Figura @ 2019-07-31 15:15 UTC (permalink / raw)
  To: Peter Zijlstra, Gabriel Krisman Bertazi
  Cc: tglx, mingo, dvhart, linux-kernel, kernel, Steven Noonan,
	Pierre-Loup A . Griffais, viro, jannh

On 7/31/19 7:06 AM, Peter Zijlstra wrote:
> On Tue, Jul 30, 2019 at 06:06:02PM -0400, Gabriel Krisman Bertazi wrote:
>> This is a new futex operation, called FUTEX_WAIT_MULTIPLE, which allows
>> a thread to wait on several futexes at the same time, and be awoken by
>> any of them.  In a sense, it implements one of the features that was
>> supported by polling on the old FUTEX_FD interface.
>>
>> My use case for this operation lies in Wine, where we want to implement
>> a similar interface available in Windows, used mainly for event
>> handling.  The wine folks have an implementation that uses eventfd, but
>> it suffers from FD exhaustion (I was told they have applications that go
>> to the order of multi-million FDs) and higher CPU utilization.
> 
> So is multi-million the range we expect for @count ?
> 

Not in Wine's case; in fact Wine has a hard limit of 64 synchronization 
primitives that can be waited on at once (which, with the current 
user-side code, translates into 65 futexes). The exhaustion just had to 
do with the number of primitives created; some programs seem to leak 
them badly.


* Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-07-31 15:15     ` Zebediah Figura
@ 2019-07-31 22:39       ` Thomas Gleixner
  2019-07-31 23:02         ` Zebediah Figura
  0 siblings, 1 reply; 16+ messages in thread
From: Thomas Gleixner @ 2019-07-31 22:39 UTC (permalink / raw)
  To: Zebediah Figura
  Cc: Peter Zijlstra, Gabriel Krisman Bertazi, mingo, dvhart,
	linux-kernel, kernel, Steven Noonan, Pierre-Loup A . Griffais,
	viro, jannh

On Wed, 31 Jul 2019, Zebediah Figura wrote:
> On 7/31/19 7:06 AM, Peter Zijlstra wrote:
> > On Tue, Jul 30, 2019 at 06:06:02PM -0400, Gabriel Krisman Bertazi wrote:
> > > This is a new futex operation, called FUTEX_WAIT_MULTIPLE, which allows
> > > a thread to wait on several futexes at the same time, and be awoken by
> > > any of them.  In a sense, it implements one of the features that was
> > > supported by polling on the old FUTEX_FD interface.
> > > 
> > > My use case for this operation lies in Wine, where we want to implement
> > > a similar interface available in Windows, used mainly for event
> > > handling.  The wine folks have an implementation that uses eventfd, but
> > > it suffers from FD exhaustion (I was told they have applications that go
> > > to the order of multi-million FDs) and higher CPU utilization.
> > 
> > So is multi-million the range we expect for @count ?
> > 
> 
> Not in Wine's case; in fact Wine has a hard limit of 64 synchronization
> primitives that can be waited on at once (which, with the current user-side
> code, translates into 65 futexes). The exhaustion just had to do with the
> number of primitives created; some programs seem to leak them badly.

And how is the futex approach better suited to 'fix' resource leaks?

Thanks,

	tglx




* Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-07-31 22:39       ` Thomas Gleixner
@ 2019-07-31 23:02         ` Zebediah Figura
  0 siblings, 0 replies; 16+ messages in thread
From: Zebediah Figura @ 2019-07-31 23:02 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Peter Zijlstra, Gabriel Krisman Bertazi, mingo, dvhart,
	linux-kernel, kernel, Steven Noonan, Pierre-Loup A . Griffais,
	viro, jannh

On 7/31/19 5:39 PM, Thomas Gleixner wrote:
> On Wed, 31 Jul 2019, Zebediah Figura wrote:
>> On 7/31/19 7:06 AM, Peter Zijlstra wrote:
>>> On Tue, Jul 30, 2019 at 06:06:02PM -0400, Gabriel Krisman Bertazi wrote:
>>>> This is a new futex operation, called FUTEX_WAIT_MULTIPLE, which allows
>>>> a thread to wait on several futexes at the same time, and be awoken by
>>>> any of them.  In a sense, it implements one of the features that was
>>>> supported by polling on the old FUTEX_FD interface.
>>>>
>>>> My use case for this operation lies in Wine, where we want to implement
>>>> a similar interface available in Windows, used mainly for event
>>>> handling.  The wine folks have an implementation that uses eventfd, but
>>>> it suffers from FD exhaustion (I was told they have applications that go
>>>> to the order of multi-million FDs) and higher CPU utilization.
>>>
>>> So is multi-million the range we expect for @count ?
>>>
>>
>> Not in Wine's case; in fact Wine has a hard limit of 64 synchronization
>> primitives that can be waited on at once (which, with the current user-side
>> code, translates into 65 futexes). The exhaustion just had to do with the
>> number of primitives created; some programs seem to leak them badly.
> 
> And how is the futex approach better suited to 'fix' resource leaks?
> 

The crucial constraints for implementing Windows synchronization 
primitives in Wine are that (a) it must be possible to access them from 
multiple processes and (b) it must be possible to wait on more than one 
at a time.

The current best solution for this, performance-wise, backs each Windows 
synchronization primitive with an eventfd(2) descriptor and uses poll(2) 
to select on them. Windows programs can create an apparently unbounded 
number of synchronization objects, though they can only wait on up to 64 
at a time. However, on Linux the NOFILE limit causes problems; some 
distributions have it as low as 4096 by default, which is too low even 
for some modern programs that don't leak objects.

The approach we are developing, that relies on this patch, backs each 
object with a single futex whose value represents its signaled state. 
Therefore the only resource we are at risk of running out of is 
available memory, which exists in far greater quantities than available 
descriptors. [Presumably Windows synchronization primitives require at 
least some kernel memory to be allocated per object as well, so this 
puts us essentially at parity, for whatever that's worth.]

To be clear, I think the primary impetus for developing the futex-based 
approach was performance; it lets us avoid some system calls in hot 
paths (e.g. waiting on an already signaled object, or resetting the state 
of an object to unsignaled; in that respect we're trying to get ahead of 
Windows, I guess). But we have still been encountering occasional grief 
due to NOFILE limits that are too low, so this is another helpful benefit.


* Re: [PATCH RFC 1/2] futex: Split key setup from key queue locking and read
  2019-07-30 22:06 [PATCH RFC 1/2] futex: Split key setup from key queue locking and read Gabriel Krisman Bertazi
  2019-07-30 22:06 ` [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes Gabriel Krisman Bertazi
@ 2019-07-31 23:33 ` Thomas Gleixner
  2019-08-01  0:07   ` Gabriel Krisman Bertazi
  1 sibling, 1 reply; 16+ messages in thread
From: Thomas Gleixner @ 2019-07-31 23:33 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi; +Cc: mingo, peterz, dvhart, linux-kernel, kernel

On Tue, 30 Jul 2019, Gabriel Krisman Bertazi wrote:

> split the futex key setup from the queue locking and key reading.  This
> is useful to support the setup of multiple keys at the same time, like
> what is done in futex_requeue() and what will be done for the

What has this to do with futex_requeue()? Absolutely nothing unless you
can reuse that code there, which I doubt.

Thanks,

	tglx


* Re: [PATCH RFC 1/2] futex: Split key setup from key queue locking and read
  2019-07-31 23:33 ` [PATCH RFC 1/2] futex: Split key setup from key queue locking and read Thomas Gleixner
@ 2019-08-01  0:07   ` Gabriel Krisman Bertazi
  2019-08-01  0:22     ` Gabriel Krisman Bertazi
  2019-08-01  0:41     ` Thomas Gleixner
  0 siblings, 2 replies; 16+ messages in thread
From: Gabriel Krisman Bertazi @ 2019-08-01  0:07 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: mingo, peterz, dvhart, linux-kernel, kernel

Thomas Gleixner <tglx@linutronix.de> writes:

> On Tue, 30 Jul 2019, Gabriel Krisman Bertazi wrote:
>
>> split the futex key setup from the queue locking and key reading.  This
>> is useful to support the setup of multiple keys at the same time, like
>> what is done in futex_requeue() and what will be done for the
>
> What has this to do with futex_requeue()? Absolutely nothing unless you
> can reuse that code there, which I doubt.

futex_requeue is another place where more than one key is set up at a
time.  Just that.  I think it could be reused there, but this change is
out of scope for this patch.

-- 
Gabriel Krisman Bertazi


* Re: [PATCH RFC 1/2] futex: Split key setup from key queue locking and read
  2019-08-01  0:07   ` Gabriel Krisman Bertazi
@ 2019-08-01  0:22     ` Gabriel Krisman Bertazi
  2019-08-01  0:41     ` Thomas Gleixner
  1 sibling, 0 replies; 16+ messages in thread
From: Gabriel Krisman Bertazi @ 2019-08-01  0:22 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: mingo, peterz, dvhart, linux-kernel, kernel

Gabriel Krisman Bertazi <krisman@collabora.com> writes:

> Thomas Gleixner <tglx@linutronix.de> writes:
>> What has this to do with futex_requeue()? Absolutely nothing unless you
>> can reuse that code there, which I doubt.
>
>I think it could be reused there

Though I admit to not having tried it out.  I suppose I can just drop
the reference from the commit message when I submit the new version.


-- 
Gabriel Krisman Bertazi


* Re: [PATCH RFC 1/2] futex: Split key setup from key queue locking and read
  2019-08-01  0:07   ` Gabriel Krisman Bertazi
  2019-08-01  0:22     ` Gabriel Krisman Bertazi
@ 2019-08-01  0:41     ` Thomas Gleixner
  1 sibling, 0 replies; 16+ messages in thread
From: Thomas Gleixner @ 2019-08-01  0:41 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi; +Cc: mingo, peterz, dvhart, linux-kernel, kernel

On Wed, 31 Jul 2019, Gabriel Krisman Bertazi wrote:
> Thomas Gleixner <tglx@linutronix.de> writes:
> 
> > On Tue, 30 Jul 2019, Gabriel Krisman Bertazi wrote:
> >
> >> split the futex key setup from the queue locking and key reading.  This
> >> is useful to support the setup of multiple keys at the same time, like
> >> what is done in futex_requeue() and what will be done for the
> >
> > What has this to do with futex_requeue()? Absolutely nothing unless you
> > can reuse that code there, which I doubt.
> 
> futex_requeue is another place where more than one key is setup at a
> time.  Just that.  I think it could be reused there, but this change is
> out of scope for this patch.

No it can't. And if it could, then it would be definitely in scope of this
patch set to reuse functionality.

Thanks,

	tglx


* Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-07-30 22:06 ` [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes Gabriel Krisman Bertazi
  2019-07-31 12:06   ` Peter Zijlstra
@ 2019-08-01  0:45   ` Thomas Gleixner
  2019-08-01  1:22     ` Zebediah Figura
  1 sibling, 1 reply; 16+ messages in thread
From: Thomas Gleixner @ 2019-08-01  0:45 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi
  Cc: mingo, peterz, dvhart, linux-kernel, kernel, Zebediah Figura,
	Steven Noonan, Pierre-Loup A . Griffais

On Tue, 30 Jul 2019, Gabriel Krisman Bertazi wrote:
> + retry:
> +	for (i = 0; i < count; i++) {
> +		qs[i].key = FUTEX_KEY_INIT;
> +		qs[i].bitset = wb[i].bitset;
> +
> +		ret = get_futex_key(wb[i].uaddr, flags & FLAGS_SHARED,
> +				    &qs[i].key, FUTEX_READ);
> +		if (unlikely(ret != 0)) {
> +			for (--i; i >= 0; i--)
> +				put_futex_key(&qs[i].key);
> +			goto out;
> +		}
> +	}
> +
> +	set_current_state(TASK_INTERRUPTIBLE);
> +
> +	for (i = 0; i < count; i++) {
> +		ret = __futex_wait_setup(wb[i].uaddr, wb[i].val,
> +					 flags, &qs[i], &hb);
> +		if (ret) {
> +			/* Drop the failed key directly.  keys 0..(i-1)
> +			 * will be put by unqueue_me.
> +			 */
> +			put_futex_key(&qs[i].key);
> +
> +			/* Undo the partial work we did. */
> +			for (--i; i >= 0; i--)
> +				unqueue_me(&qs[i]);

That lacks a comment explaining why this does not evaluate the return value
of unqueue_me(). If one of the already queued futexes got woken, then it's
debatable whether that counts as success or not. Whatever the decision is,
it needs to be documented instead of documenting the obvious.

> +			__set_current_state(TASK_RUNNING);
> +			if (ret > 0)
> +				goto retry;
> +			goto out;
> +		}
> +
> +		/* We can't hold to the bucket lock when dealing with
> +		 * the next futex. Queue ourselves now so we can unlock
> +		 * it before moving on.
> +		 */
> +		queue_me(&qs[i], hb);
> +	}

Why  does this use two loops and two cleanup loops instead of doing
it in a single one?

> +
> +	/* There is no easy to way to check if we are wake already on
> +	 * multiple futexes without waking through each one of them.  So

waking?

> +	ret = -ERESTARTSYS;
> +	if (!abs_time)
> +		goto out;
> +
> +	ret = -ERESTART_RESTARTBLOCK;
> + out:

There is surely no more convoluted way to write this? That goto above is
just making my eyes bleed. Yes I know where you copied that from and then
removed everything between the goto and the assignment.

     	ret = abs_time ? -ERESTART_RESTARTBLOCK : -ERESTARTSYS;

would be too simple to read.

Also this is a new interface and there is absolutely no reason to make it
misfeature compatible to FUTEX_WAIT even for the case that this might share
code. Supporting relative timeouts is wrong to begin with and for absolute
timeouts there is no reason to use restartblock.

> +static int futex_wait_multiple(u32 __user *uaddr, unsigned int flags,
> +			       u32 count, ktime_t *abs_time)
> +{
> +	struct futex_wait_block *wb;
> +	struct restart_block *restart;
> +	int ret;
> +
> +	if (!count)
> +		return -EINVAL;
> +
> +
> +	wb = kcalloc(count, sizeof(struct futex_wait_block), GFP_KERNEL);

There are a couple of wrongs here:

  - Lacks a sane upper limit for count. Relying on kcalloc() to fail is
    just wrong.
  
  - kcalloc() does a pointless zeroing for a buffer which is going to be
    overwritten anyway

  - sizeof(struct foo) instead of sizeof(*wb)

count * sizeof(*wb) must be calculated anyway because copy_from_user()
needs it. So using kcalloc() is pointless.
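
A minimal sketch along those lines (illustrative only;
FUTEX_WAIT_MULTIPLE_MAX stands in for whatever upper bound gets chosen):

	unsigned int size;

	if (!count || count > FUTEX_WAIT_MULTIPLE_MAX)
		return -EINVAL;

	size = count * sizeof(*wb);	/* bounded above, cannot overflow */
	wb = kmalloc(size, GFP_KERNEL);
	if (!wb)
		return -ENOMEM;

	if (copy_from_user(wb, uaddr, size)) {
		ret = -EFAULT;
		goto out;
	}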

> +	if (!wb)
> +		return -ENOMEM;
> +
> +	if (copy_from_user(wb, uaddr,
> +			   count * sizeof(struct futex_wait_block))) {

And doing the size calculation only once spares that fugly line break.

But that's cosmetic. The way more serious issue is:

How is that supposed to work with compat tasks? 32bit and 64 bit apps do
not share the same pointer size and your UAPI struct is:

> +struct futex_wait_block {
> +	__u32 __user *uaddr;
> +	__u32 val;
> +	__u32 bitset;
> +};

A 64 bit kernel will happily read beyond the user buffer when called from a
32bit task and all struct members turn into garbage. That can be solved
with padding so pointer conversion can be avoided, but you have to be
careful about endianness.
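
For illustration, one layout that is invariant across 32-bit and 64-bit
userspace (not what the RFC does, and only one of the options; the padding
variant mentioned above keeps the __u32 pointer and pads the 32-bit layout
instead) would be:

	struct futex_wait_block {
		__u64 uaddr;	/* futex address, stored zero-extended by userspace */
		__u32 val;
		__u32 bitset;
	};

The kernel would then convert the field with u64_to_user_ptr() instead of
dereferencing a native pointer.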

Coming back to the allocation itself:

do_futex_wait_multiple() does another allocation depending on count. So you
call into the memory allocator twice in a row and on the way out you do the
same thing again with free.

Looking at the size constraints.

If I assume a maximum of 65 futexes which got mentioned in one of the
replies then this will allocate 7280 bytes alone for the futex_q array with
a stock debian config which has no debug options enabled which would bloat
the struct. Adding the futex_wait_block array into the same allocation
becomes larger than 8K which already exceeds the limit of SLUB kmem
caches and forces the whole thing into the page allocator directly.

This sucks.
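
(For reference: 65 * 112 = 7280 bytes for the futex_q array; with 16 bytes
per futex_wait_block on 64-bit, 65 * 16 = 1040 bytes more, so 7280 + 1040 =
8320 bytes, just past the 8K kmalloc cache.)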

Also I'm confused about the 64 maximum resulting in 65 futexes comment in
one of the mails.

Can you please explain what you are trying to do exactly on the user space
side?

Thanks,

	tglx



* Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-08-01  0:45   ` Thomas Gleixner
@ 2019-08-01  1:22     ` Zebediah Figura
  2019-08-01  1:32       ` Zebediah Figura
  0 siblings, 1 reply; 16+ messages in thread
From: Zebediah Figura @ 2019-08-01  1:22 UTC (permalink / raw)
  To: Thomas Gleixner, Gabriel Krisman Bertazi
  Cc: mingo, peterz, dvhart, linux-kernel, kernel, Steven Noonan,
	Pierre-Loup A . Griffais

On 7/31/19 7:45 PM, Thomas Gleixner wrote:
> If I assume a maximum of 65 futexes which got mentioned in one of the
> replies then this will allocate 7280 bytes alone for the futex_q array with
> a stock debian config which has no debug options enabled which would bloat
> the struct. Adding the futex_wait_block array into the same allocation
> becomes larger than 8K which already exceeds the limit of SLUB kmem
> caches and forces the whole thing into the page allocator directly.
> 
> This sucks.
> 
> Also I'm confused about the 64 maximum resulting in 65 futexes comment in
> one of the mails.
> 
> Can you please explain what you are trying to do exactly on the user space
> side?

The extra futex comes from the fact that there are a couple of, as it 
were, out-of-band ways to wake up a thread on Windows. [Specifically, a 
thread can enter an "alertable" wait in which case it will be woken up 
by a request from another thread to execute an "asynchronous procedure 
call".] It's easiest for us to just add another futex to the list in 
that case.

I'd also point out, for whatever it's worth, that while 64 is a hard 
limit, real applications almost never go nearly that high. By far the 
most common number of primitives to select on is one. 
Performance-critical code never tends to wait on more than three. The 
most I've ever seen is twelve.

If you'd like to see the user-side source, most of the relevant code is 
at [1], in particular the functions __fsync_wait_objects() [line 712] 
and do_single_wait [line 655]. Please feel free to ask for further 
clarification.

[1] 
https://github.com/ValveSoftware/wine/blob/proton_4.11/dlls/ntdll/fsync.c



> 
> Thanks,
> 
> 	tglx
> 



* Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-08-01  1:22     ` Zebediah Figura
@ 2019-08-01  1:32       ` Zebediah Figura
  2019-08-01  1:42         ` Pierre-Loup A. Griffais
  0 siblings, 1 reply; 16+ messages in thread
From: Zebediah Figura @ 2019-08-01  1:32 UTC (permalink / raw)
  To: Thomas Gleixner, Gabriel Krisman Bertazi
  Cc: mingo, peterz, dvhart, linux-kernel, kernel, Steven Noonan,
	Pierre-Loup A . Griffais

On 7/31/19 8:22 PM, Zebediah Figura wrote:
> On 7/31/19 7:45 PM, Thomas Gleixner wrote:
>> If I assume a maximum of 65 futexes which got mentioned in one of the
>> replies then this will allocate 7280 bytes alone for the futex_q array with
>> a stock debian config which has no debug options enabled which would bloat
>> the struct. Adding the futex_wait_block array into the same allocation
>> becomes larger than 8K which already exceeds the limit of SLUB kmem
>> caches and forces the whole thing into the page allocator directly.
>>
>> This sucks.
>>
>> Also I'm confused about the 64 maximum resulting in 65 futexes comment in
>> one of the mails.
>>
>> Can you please explain what you are trying to do exactly on the user space
>> side?
> 
> The extra futex comes from the fact that there are a couple of, as it
> were, out-of-band ways to wake up a thread on Windows. [Specifically, a
> thread can enter an "alertable" wait in which case it will be woken up
> by a request from another thread to execute an "asynchronous procedure
> call".] It's easiest for us to just add another futex to the list in
> that case.

To be clear, the 64/65 distinction is an implementation detail that's 
pretty much outside the scope of this discussion. I should have just 
said 65 directly. Sorry about that.

> 
> I'd also point out, for whatever it's worth, that while 64 is a hard
> limit, real applications almost never go nearly that high. By far the
> most common number of primitives to select on is one.
> Performance-critical code never tends to wait on more than three. The
> most I've ever seen is twelve.
> 
> If you'd like to see the user-side source, most of the relevant code is
> at [1], in particular the functions __fsync_wait_objects() [line 712]
> and do_single_wait [line 655]. Please feel free to ask for further
> clarification.
> 
> [1]
> https://github.com/ValveSoftware/wine/blob/proton_4.11/dlls/ntdll/fsync.c
> 
> 
> 
>>
>> Thanks,
>>
>> 	tglx
>>
> 



* Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-08-01  1:32       ` Zebediah Figura
@ 2019-08-01  1:42         ` Pierre-Loup A. Griffais
  0 siblings, 0 replies; 16+ messages in thread
From: Pierre-Loup A. Griffais @ 2019-08-01  1:42 UTC (permalink / raw)
  To: Zebediah Figura, Thomas Gleixner, Gabriel Krisman Bertazi
  Cc: mingo, peterz, dvhart, linux-kernel, kernel, Steven Noonan

On 7/31/19 6:32 PM, Zebediah Figura wrote:
> On 7/31/19 8:22 PM, Zebediah Figura wrote:
>> On 7/31/19 7:45 PM, Thomas Gleixner wrote:
>>> If I assume a maximum of 65 futexes which got mentioned in one of the
>>> replies then this will allocate 7280 bytes alone for the futex_q 
>>> array with
>>> a stock debian config which has no debug options enabled which would 
>>> bloat
>>> the struct. Adding the futex_wait_block array into the same allocation
>>> becomes larger than 8K which already exceeds the limit of SLUB kmem
>>> caches and forces the whole thing into the page allocator directly.
>>>
>>> This sucks.
>>>
>>> Also I'm confused about the 64 maximum resulting in 65 futexes 
>>> comment in
>>> one of the mails.
>>>
>>> Can you please explain what you are trying to do exactly on the user 
>>> space
>>> side?
>>
>> The extra futex comes from the fact that there are a couple of, as it
>> were, out-of-band ways to wake up a thread on Windows. [Specifically, a
>> thread can enter an "alertable" wait in which case it will be woken up
>> by a request from another thread to execute an "asynchronous procedure
>> call".] It's easiest for us to just add another futex to the list in
>> that case.
> 
> To be clear, the 64/65 distinction is an implementation detail that's 
> pretty much outside the scope of this discussion. I should have just 
> said 65 directly. Sorry about that.
> 
>>
>> I'd also point out, for whatever it's worth, that while 64 is a hard
>> limit, real applications almost never go nearly that high. By far the
>> most common number of primitives to select on is one.
>> Performance-critical code never tends to wait on more than three. The
>> most I've ever seen is twelve.
>>
>> If you'd like to see the user-side source, most of the relevant code is
>> at [1], in particular the functions __fsync_wait_objects() [line 712]
>> and do_single_wait [line 655]. Please feel free to ask for further
>> clarification.
>>
>> [1]
>> https://github.com/ValveSoftware/wine/blob/proton_4.11/dlls/ntdll/fsync.c

In addition, here's an example of how I think it might be useful to 
expose it to apps at large in a way that's compatible with existing 
pthread mutexes:

https://github.com/Plagman/glibc/commit/3b01145fa25987f2f93e7eda7f3e7d0f2f77b290

This patch hasn't received nearly as much testing as the Wine fsync code 
path, but that functionality would provide more CPU-efficient ways for 
thread pool code to sleep in our game engine. We also use eventfd today.

For this, I think the expected upper bound for the per-op futex count 
would be in the same order of magnitude as the logical CPU count on the 
target machine, similar to the Wine use case.

Thanks,
  - Pierre-Loup

>>
>>
>>
>>>
>>> Thanks,
>>>
>>>     tglx
>>>
>>
> 



* Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-07-31 12:06   ` Peter Zijlstra
  2019-07-31 15:15     ` Zebediah Figura
@ 2019-08-06  6:26     ` Gabriel Krisman Bertazi
  2019-08-06 10:13       ` Peter Zijlstra
  1 sibling, 1 reply; 16+ messages in thread
From: Gabriel Krisman Bertazi @ 2019-08-06  6:26 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: tglx, mingo, dvhart, linux-kernel, kernel, Zebediah Figura,
	Steven Noonan, Pierre-Loup A . Griffais, viro, jannh

Peter Zijlstra <peterz@infradead.org> writes:

>
>> +static int futex_wait_multiple(u32 __user *uaddr, unsigned int flags,
>> +			       u32 count, ktime_t *abs_time)
>> +{
>> +	struct futex_wait_block *wb;
>> +	struct restart_block *restart;
>> +	int ret;
>> +
>> +	if (!count)
>> +		return -EINVAL;
>> +
>> +	wb = kcalloc(count, sizeof(struct futex_wait_block), GFP_KERNEL);
>> +	if (!wb)
>> +		return -ENOMEM;
>> +
>> +	if (copy_from_user(wb, uaddr,
>> +			   count * sizeof(struct futex_wait_block))) {
>> +		ret = -EFAULT;
>> +		goto out;
>> +	}
>
> I'm thinking we can do away with this giant copy and do it one at a time
> from the other function, just extend the storage allocated there to
> store whatever values are still required later.

Hey Peter,

Thanks for your very detailed review.  It is deeply appreciated.  My
apologies for the style issues; I blindly trusted checkpatch.pl when it
said it was ready for submission.

I'm not sure I get the suggestion here.  If I understand the code
correctly, once we do it one at a time, we need to queue_me() each futex
and then drop the hb lock, before going to the next one.  Once we go to
the next one, we need to call get_user_pages (and now copy_from_user),
both of which can sleep, and on return set the task state to
TASK_RUNNING.  This opens a window where we can wake up the task but it
is not in the right sleeping state, which from the comment in
futex_wait_queue_me(), seems problematic.  This is also the reason why I
wanted to split the key memory pin from the actual read in patch 1/2.

Did you consider this problem or is it not a problem for some reason?
What am I missing?

-- 
Gabriel Krisman Bertazi


* Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes
  2019-08-06  6:26     ` Gabriel Krisman Bertazi
@ 2019-08-06 10:13       ` Peter Zijlstra
  0 siblings, 0 replies; 16+ messages in thread
From: Peter Zijlstra @ 2019-08-06 10:13 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi
  Cc: tglx, mingo, dvhart, linux-kernel, kernel, Zebediah Figura,
	Steven Noonan, Pierre-Loup A . Griffais, viro, jannh

On Tue, Aug 06, 2019 at 02:26:38AM -0400, Gabriel Krisman Bertazi wrote:
> Peter Zijlstra <peterz@infradead.org> writes:
> 
> >
> >> +static int futex_wait_multiple(u32 __user *uaddr, unsigned int flags,
> >> +			       u32 count, ktime_t *abs_time)
> >> +{
> >> +	struct futex_wait_block *wb;
> >> +	struct restart_block *restart;
> >> +	int ret;
> >> +
> >> +	if (!count)
> >> +		return -EINVAL;
> >> +
> >> +	wb = kcalloc(count, sizeof(struct futex_wait_block), GFP_KERNEL);
> >> +	if (!wb)
> >> +		return -ENOMEM;
> >> +
> >> +	if (copy_from_user(wb, uaddr,
> >> +			   count * sizeof(struct futex_wait_block))) {
> >> +		ret = -EFAULT;
> >> +		goto out;
> >> +	}
> >
> > I'm thinking we can do away with this giant copy and do it one at a time
> > from the other function, just extend the storage allocated there to
> > store whatever values are still required later.

> I'm not sure I get the suggestion here.  If I understand the code
> correctly, once we do it one at a time, we need to queue_me() each futex
> and then drop the hb lock, before going to the next one. 

So the idea is to reduce to a single allocation, like Thomas also said.
And afaict, we've not yet taken any locks the first time we iterate --
that only does get_futex_key(); the whole __futex_wait_setup() and
queue_me() part comes later, in the second iteration.





Thread overview: 16+ messages
2019-07-30 22:06 [PATCH RFC 1/2] futex: Split key setup from key queue locking and read Gabriel Krisman Bertazi
2019-07-30 22:06 ` [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes Gabriel Krisman Bertazi
2019-07-31 12:06   ` Peter Zijlstra
2019-07-31 15:15     ` Zebediah Figura
2019-07-31 22:39       ` Thomas Gleixner
2019-07-31 23:02         ` Zebediah Figura
2019-08-06  6:26     ` Gabriel Krisman Bertazi
2019-08-06 10:13       ` Peter Zijlstra
2019-08-01  0:45   ` Thomas Gleixner
2019-08-01  1:22     ` Zebediah Figura
2019-08-01  1:32       ` Zebediah Figura
2019-08-01  1:42         ` Pierre-Loup A. Griffais
2019-07-31 23:33 ` [PATCH RFC 1/2] futex: Split key setup from key queue locking and read Thomas Gleixner
2019-08-01  0:07   ` Gabriel Krisman Bertazi
2019-08-01  0:22     ` Gabriel Krisman Bertazi
2019-08-01  0:41     ` Thomas Gleixner
