[RFC,14/15] epoll: support polling from userspace for ep_poll()

Message ID 20190109164025.24554-15-rpenyaev@suse.de
State Superseded
Series
  • epoll: support pollable epoll from userspace

Commit Message

Roman Penyaev Jan. 9, 2019, 4:40 p.m. UTC
When epfd is polled from userspace and the user calls epoll_wait():

1. If the user ring is not fully consumed (i.e. head != tail), return
   -ESTALE, indicating that some action on the user side is required.

2. If events were routed to klists, memory was probably expanded or a
   shrink is still required.  Shrink if needed and transfer all
   collected events from the kernel lists to the uring.

3. Ensure with a WARN that ep_send_events() can't be called from
   ep_poll() when epfd is pollable from userspace.

4. Wait for events on the wait queue; always return -ESTALE if
   awakened, indicating that events have to be consumed from the user
   ring.

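The dispatch logic of the four steps above can be modeled as a small
userspace sketch. Note this is illustrative only: the struct fields and
the uepoll_wait_entry() helper are invented names, not the kernel's, and
the shrink/transfer step is modeled as always succeeding.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical minimal model of the user-polled epoll state */
struct uepoll {
	unsigned head, tail;	/* user ring indices */
	bool polled_by_user;	/* epfd created for userspace polling */
	bool routed_to_klists;	/* events parked on kernel lists */
};

/*
 * Returns 0 to proceed with the normal kernel-side wait, or -ESTALE
 * when the caller must consume events from the user ring instead.
 */
static int uepoll_wait_entry(struct uepoll *ep)
{
	if (!ep->polled_by_user)
		return 0;		/* classic epoll_wait() path */

	if (ep->head != ep->tail)
		return -ESTALE;		/* step 1: ring not fully consumed */

	if (ep->routed_to_klists) {
		/*
		 * Step 2: shrink if needed and move klist events to the
		 * uring; modeled here as always transferring something.
		 */
		ep->routed_to_klists = false;
		return -ESTALE;
	}
	return 0;			/* step 4: fall through to wait */
}
```

Any -ESTALE result tells userspace to drain the ring and retry, which is
what keeps the kernel from ever copying events via ep_send_events() on a
user-polled epfd (step 3).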
Signed-off-by: Roman Penyaev <rpenyaev@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: Jason Baron <jbaron@akamai.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 fs/eventpoll.c | 46 +++++++++++++++++++++++++++++++++++++---------
 1 file changed, 37 insertions(+), 9 deletions(-)

Patch

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 2b38a3d884e8..5de640fcf28b 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -523,7 +523,8 @@  static inline bool ep_user_ring_events_available(struct eventpoll *ep)
 static inline int ep_events_available(struct eventpoll *ep)
 {
 	return !list_empty_careful(&ep->rdllist) ||
-		READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR;
+		READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR ||
+		ep_user_ring_events_available(ep);
 }
 
 #ifdef CONFIG_NET_RX_BUSY_POLL
@@ -2411,6 +2412,8 @@  static int ep_send_events(struct eventpoll *ep,
 {
 	struct ep_send_events_data esed;
 
+	WARN_ON(ep_polled_by_user(ep));
+
 	esed.maxevents = maxevents;
 	esed.events = events;
 
@@ -2607,6 +2610,24 @@  static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 
 	lockdep_assert_irqs_enabled();
 
+	if (ep_polled_by_user(ep)) {
+		if (ep_user_ring_events_available(ep))
+			/* Firstly all events from ring have to be consumed */
+			return -ESTALE;
+
+		if (ep_events_routed_to_klists(ep)) {
+			res = ep_transfer_events_and_shrink_uring(ep);
+			if (unlikely(res < 0))
+				return res;
+			if (res)
+				/*
+				 * Events were transferred from klists to
+				 * user ring
+				 */
+				return -ESTALE;
+		}
+	}
+
 	if (timeout > 0) {
 		struct timespec64 end_time = ep_set_mstimeout(timeout);
 
@@ -2695,14 +2716,21 @@  static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 	__set_current_state(TASK_RUNNING);
 
 send_events:
-	/*
-	 * Try to transfer events to user space. In case we get 0 events and
-	 * there's still timeout left over, we go trying again in search of
-	 * more luck.
-	 */
-	if (!res && eavail &&
-	    !(res = ep_send_events(ep, events, maxevents)) && !timed_out)
-		goto fetch_events;
+	if (!res && eavail) {
+		if (!ep_polled_by_user(ep)) {
+			/*
+			 * Try to transfer events to user space. In case we get
+			 * 0 events and there's still timeout left over, we go
+			 * trying again in search of more luck.
+			 */
+			res = ep_send_events(ep, events, maxevents);
+			if (!res && !timed_out)
+				goto fetch_events;
+		} else {
+			/* User has to deal with the ring himself */
+			res = -ESTALE;
+		}
+	}
 
 	if (waiter) {
 		spin_lock_irq(&ep->wq.lock);
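For completeness, a sketch of what the userspace side might do when
epoll_wait() returns -ESTALE: drain the ring until head catches up with
tail. The ring layout here (struct uring_hdr, struct uring_ev) is an
assumption for illustration; the real shared layout and the memory
barriers required between kernel producer and user consumer are defined
elsewhere in this series.

```c
#include <assert.h>

/* Assumed power-of-two ring shared with the kernel (illustrative) */
struct uring_hdr {
	unsigned head;	/* advanced only by the consumer (userspace) */
	unsigned tail;	/* advanced only by the producer (kernel) */
	unsigned mask;	/* ring size - 1 */
};

struct uring_ev {
	unsigned events;
	unsigned long long data;
};

/*
 * Consume everything currently published in the ring into 'out';
 * returns the number of events drained.  A real consumer would need
 * acquire/release ordering on tail/head, omitted in this model.
 */
static unsigned consume_ring(struct uring_hdr *hdr,
			     const struct uring_ev *items,
			     struct uring_ev *out)
{
	unsigned n = 0;

	while (hdr->head != hdr->tail) {
		out[n++] = items[hdr->head & hdr->mask];
		hdr->head++;
	}
	return n;
}
```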