* waitqueue lockdep annotation
@ 2017-11-30 14:20 Christoph Hellwig
  2017-11-30 14:20 ` [PATCH 1/2] sched/wait: assert the wait_queue_head lock is held in __wake_up_common Christoph Hellwig
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Christoph Hellwig @ 2017-11-30 14:20 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: Andrew Morton, Al Viro, linux-fsdevel, linux-kernel

Hi all,

this series adds a strategic lockdep_assert_held to __wake_up_common
to ensure callers really do hold the wait_queue_head lock when calling
the unlocked wake_up variants.  It turns out epoll did not do this
for a fairly common path (hit all the time by systemd during bootup),
so the second patch fixed this instance as well.
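
As a reminder (an illustrative sketch only, not part of the patches; the
function name here is made up), these are the two calling conventions the
new assertion distinguishes:

	static void wake_up_conventions_sketch(void)
	{
		wait_queue_head_t wq;
		unsigned long flags;

		init_waitqueue_head(&wq);

		/* wake_up() takes wq.lock internally, no caller locking needed: */
		wake_up(&wq);

		/* wake_up_locked() must only be called with wq.lock already held: */
		spin_lock_irqsave(&wq.lock, flags);
		wake_up_locked(&wq);
		spin_unlock_irqrestore(&wq.lock, flags);
	}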

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 1/2] sched/wait: assert the wait_queue_head lock is held in __wake_up_common
  2017-11-30 14:20 waitqueue lockdep annotation Christoph Hellwig
@ 2017-11-30 14:20 ` Christoph Hellwig
  2017-11-30 14:20 ` [PATCH 2/2] epoll: use proper wake_up variant in ep_poll_callback Christoph Hellwig
  2017-11-30 20:50 ` waitqueue lockdep annotation Andrew Morton
  2 siblings, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2017-11-30 14:20 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: Andrew Morton, Al Viro, linux-fsdevel, linux-kernel

Better ensure we actually hold the lock using lockdep than just commenting
on it.  Due to the various exported _locked interfaces it is far too easy
to get the locking wrong.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/sched/wait.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index 98feab7933c7..347c06c8222e 100644
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -76,6 +76,8 @@ static int __wake_up_common(struct wait_queue_head *wq_head, unsigned int mode,
 	wait_queue_entry_t *curr, *next;
 	int cnt = 0;
 
+	lockdep_assert_held(&wq_head->lock);
+
 	if (bookmark && (bookmark->flags & WQ_FLAG_BOOKMARK)) {
 		curr = list_next_entry(bookmark, entry);
 
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 2/2] epoll: use proper wake_up variant in ep_poll_callback
  2017-11-30 14:20 waitqueue lockdep annotation Christoph Hellwig
  2017-11-30 14:20 ` [PATCH 1/2] sched/wait: assert the wait_queue_head lock is held in __wake_up_common Christoph Hellwig
@ 2017-11-30 14:20 ` Christoph Hellwig
  2017-11-30 20:50 ` waitqueue lockdep annotation Andrew Morton
  2 siblings, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2017-11-30 14:20 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: Andrew Morton, Al Viro, linux-fsdevel, linux-kernel

We do not hold ep->wq->lock, so we should not use wake_up_locked on it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/eventpoll.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index afd548ebc328..eaaa39616232 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1190,7 +1190,7 @@ static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, v
 				break;
 			}
 		}
-		wake_up_locked(&ep->wq);
+		wake_up(&ep->wq);
 	}
 	if (waitqueue_active(&ep->poll_wait))
 		pwake++;
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-11-30 14:20 waitqueue lockdep annotation Christoph Hellwig
  2017-11-30 14:20 ` [PATCH 1/2] sched/wait: assert the wait_queue_head lock is held in __wake_up_common Christoph Hellwig
  2017-11-30 14:20 ` [PATCH 2/2] epoll: use proper wake_up variant in ep_poll_callback Christoph Hellwig
@ 2017-11-30 20:50 ` Andrew Morton
  2017-11-30 21:38   ` Jason Baron
  2 siblings, 1 reply; 16+ messages in thread
From: Andrew Morton @ 2017-11-30 20:50 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Ingo Molnar, Peter Zijlstra, Al Viro, linux-fsdevel, linux-kernel

On Thu, 30 Nov 2017 06:20:35 -0800 Christoph Hellwig <hch@lst.de> wrote:

> Hi all,
> 
> this series adds a strategic lockdep_assert_held to __wake_up_common
> to ensure callers really do hold the wait_queue_head lock when calling
> the unlocked wake_up variants.  It turns out epoll did not do this
> for a fairly common path (hit all the time by systemd during bootup),
> so the second patch fixed this instance as well.

What are the runtime effects of the epoll bug?

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-11-30 20:50 ` waitqueue lockdep annotation Andrew Morton
@ 2017-11-30 21:38   ` Jason Baron
  2017-11-30 22:11     ` Christoph Hellwig
  0 siblings, 1 reply; 16+ messages in thread
From: Jason Baron @ 2017-11-30 21:38 UTC (permalink / raw)
  To: Andrew Morton, Christoph Hellwig
  Cc: Ingo Molnar, Peter Zijlstra, Al Viro, linux-fsdevel, linux-kernel



On 11/30/2017 03:50 PM, Andrew Morton wrote:
> On Thu, 30 Nov 2017 06:20:35 -0800 Christoph Hellwig <hch@lst.de> wrote:
> 
>> Hi all,
>>
>> this series adds a strategic lockdep_assert_held to __wake_up_common
>> to ensure callers really do hold the wait_queue_head lock when calling
>> the unlocked wake_up variants.  It turns out epoll did not do this
>> for a fairly common path (hit all the time by systemd during bootup),
>> so the second patch fixed this instance as well.
> 
> What are the runtime effects of the epoll bug?
> 

I don't think there is a bug here. The 'wake_up_locked()' calls in epoll
are being protected by the ep->lock, not the wait_queue_head lock. So
arguably the 'annotation' is wrong, but I don't think there is a bug
beyond that.
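
For reference, the path in question in ep_poll_callback() looks roughly
like this (a heavily abridged sketch; see fs/eventpoll.c for the real code):

	spin_lock_irqsave(&ep->lock, flags);
	...
	if (waitqueue_active(&ep->wq))
		wake_up_locked(&ep->wq);	/* serialized by ep->lock, not ep->wq.lock */
	...
	spin_unlock_irqrestore(&ep->lock, flags);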

Thanks,

-Jason

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-11-30 21:38   ` Jason Baron
@ 2017-11-30 22:11     ` Christoph Hellwig
  2017-11-30 22:18       ` Jason Baron
  0 siblings, 1 reply; 16+ messages in thread
From: Christoph Hellwig @ 2017-11-30 22:11 UTC (permalink / raw)
  To: Jason Baron
  Cc: Andrew Morton, Christoph Hellwig, Ingo Molnar, Peter Zijlstra,
	Al Viro, linux-fsdevel, linux-kernel

On Thu, Nov 30, 2017 at 04:38:02PM -0500, Jason Baron wrote:
> I don't think there is a bug here. The 'wake_up_locked()' calls in epoll
> are being protected by the ep->lock, not the wait_queue_head lock. So
> arguably the 'annotation' is wrong, but I don't think there is a bug
> beyond that.

They can't be protected by ep->lock.  The file might just as well be
watched using poll or select, or through another epoll fd.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-11-30 22:11     ` Christoph Hellwig
@ 2017-11-30 22:18       ` Jason Baron
  2017-12-01 17:11         ` Christoph Hellwig
  0 siblings, 1 reply; 16+ messages in thread
From: Jason Baron @ 2017-11-30 22:18 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Ingo Molnar, Peter Zijlstra, Al Viro,
	linux-fsdevel, linux-kernel



On 11/30/2017 05:11 PM, Christoph Hellwig wrote:
> On Thu, Nov 30, 2017 at 04:38:02PM -0500, Jason Baron wrote:
>> I don't think there is a bug here. The 'wake_up_locked()' calls in epoll
>> are being protected by the ep->lock, not the wait_queue_head lock. So
>> arguably the 'annotation' is wrong, but I don't think there is a bug
>> beyond that.
> 
> They can't be protected by ep->lock.  The file might just as well be
> watched using poll or select, or through another epoll fd.
> 

Yes, but for those cases it uses the ep->poll_wait waitqueue not the
ep->wq, which is guarded by the ep->wq->lock.

See the comments in 'struct eventpoll':

        /* Wait queue used by sys_epoll_wait() */
        wait_queue_head_t wq;

        /* Wait queue used by file->poll() */
        wait_queue_head_t poll_wait;

Thanks,

-Jason

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-11-30 22:18       ` Jason Baron
@ 2017-12-01 17:11         ` Christoph Hellwig
  2017-12-01 19:00           ` Jason Baron
  0 siblings, 1 reply; 16+ messages in thread
From: Christoph Hellwig @ 2017-12-01 17:11 UTC (permalink / raw)
  To: Jason Baron
  Cc: Christoph Hellwig, Andrew Morton, Ingo Molnar, Peter Zijlstra,
	Al Viro, linux-fsdevel, linux-kernel

On Thu, Nov 30, 2017 at 05:18:07PM -0500, Jason Baron wrote:
> Yes, but for those cases it uses the ep->poll_wait waitqueue not the
> ep->wq, which is guarded by the ep->wq->lock.

True.  So it looks like we have one waitqueue in the system that is
special in providing its own synchronization for waitqueues while
entirely ignoring the waitqueue code documentation that states that
waitqueues are internally synchronized.

We could drop the lockdep annotation, update the documentation and
not add any validation of the assumptions, or just make epoll fit the
scheme used by everyone else.  So either we can drop these patches, or
I need to fix up more of the epoll code.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-12-01 17:11         ` Christoph Hellwig
@ 2017-12-01 19:00           ` Jason Baron
  2017-12-01 22:02             ` Christoph Hellwig
  0 siblings, 1 reply; 16+ messages in thread
From: Jason Baron @ 2017-12-01 19:00 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Ingo Molnar, Peter Zijlstra, Al Viro,
	linux-fsdevel, linux-kernel



On 12/01/2017 12:11 PM, Christoph Hellwig wrote:
> On Thu, Nov 30, 2017 at 05:18:07PM -0500, Jason Baron wrote:
>> Yes, but for those cases it uses the ep->poll_wait waitqueue not the
>> ep->wq, which is guarded by the ep->wq->lock.
> 
> True.  So it looks like we have one waitqueue in the system that is
> special in providing its own synchronization for waitqueues while
> entirely ignoring the waitqueue code documentation that states that
> waitqueues are internally synchronized.
> 
> We could drop the lockdep annotation, update the documentation and
> not add any validation of the assumptions, or just make epoll fit the
> scheme used by everyone else.  So either we can drop these patches, or
> I need to fix up more of the epoll code.
> 

You could leave the annotation and do something like:
s/ep->lock/ep->wq->lock/. And then that would remove the ep->lock, saving
a bit of space.

Thanks,

-Jason

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-12-01 19:00           ` Jason Baron
@ 2017-12-01 22:02             ` Christoph Hellwig
  2017-12-01 22:34               ` Jason Baron
  0 siblings, 1 reply; 16+ messages in thread
From: Christoph Hellwig @ 2017-12-01 22:02 UTC (permalink / raw)
  To: Jason Baron
  Cc: Christoph Hellwig, Andrew Morton, Ingo Molnar, Peter Zijlstra,
	Al Viro, linux-fsdevel, linux-kernel

On Fri, Dec 01, 2017 at 02:00:33PM -0500, Jason Baron wrote:
> You could leave the annotation and do something like:
> s/ep->lock/ep->wq->lock/. And then that would remove the ep->lock, saving
> a bit of space.

Looks like this isn't going to work due to ep_poll_safewake taking
another waitqueue lock.  If we had a strict lock order it might work,
but the mess in ep_call_nested makes me fear it doesn't.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-12-01 22:02             ` Christoph Hellwig
@ 2017-12-01 22:34               ` Jason Baron
  2017-12-01 23:03                 ` Christoph Hellwig
  0 siblings, 1 reply; 16+ messages in thread
From: Jason Baron @ 2017-12-01 22:34 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Ingo Molnar, Peter Zijlstra, Al Viro,
	linux-fsdevel, linux-kernel

On 12/01/2017 05:02 PM, Christoph Hellwig wrote:
> On Fri, Dec 01, 2017 at 02:00:33PM -0500, Jason Baron wrote:
>> You could leave the annotation and do something like:
>> s/ep->lock/ep->wq->lock/. And then that would remove the ep->lock, saving
>> a bit of space.
> 
> Looks like this isn't going to work due to ep_poll_safewake taking
> another waitqueue lock.  If we had a strict lock order it might work,
> but the mess in ep_call_nested makes me fear it doesn't.
> 

hmmm...I'm not sure how this suggestion would change the locking rules
from what we currently have. Right now we use ep->lock; if we remove
that and use ep->wq->lock instead, there is just a 1-to-1 mapping there
that has not changed, since ep->wq->lock is currently not used at all.

Thanks,

-Jason

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-12-01 22:34               ` Jason Baron
@ 2017-12-01 23:03                 ` Christoph Hellwig
  2017-12-05 15:24                   ` Jason Baron
  0 siblings, 1 reply; 16+ messages in thread
From: Christoph Hellwig @ 2017-12-01 23:03 UTC (permalink / raw)
  To: Jason Baron
  Cc: Christoph Hellwig, Andrew Morton, Ingo Molnar, Peter Zijlstra,
	Al Viro, linux-fsdevel, linux-kernel

On Fri, Dec 01, 2017 at 05:34:50PM -0500, Jason Baron wrote:
> hmmm...I'm not sure how this suggestion would change the locking rules
> from what we currently have. Right now we use ep->lock; if we remove
> that and use ep->wq->lock instead, there is just a 1-to-1 mapping there
> that has not changed, since ep->wq->lock is currently not used at all.

True.  The patch below survives the amazingly complex booting and starting
systemd with lockdep enabled test.  Do we have something resembling an
epoll test suite?

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index afd548ebc328..2b2c5ac80e26 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -182,11 +182,10 @@ struct epitem {
  * This structure is stored inside the "private_data" member of the file
  * structure and represents the main data structure for the eventpoll
  * interface.
+ *
+ * Access to it is protected by the lock inside wq.
  */
 struct eventpoll {
-	/* Protect the access to this structure */
-	spinlock_t lock;
-
 	/*
 	 * This mutex is used to ensure that files are not removed
 	 * while epoll is using them. This is held during the event
@@ -210,7 +209,7 @@ struct eventpoll {
 	/*
 	 * This is a single linked list that chains all the "struct epitem" that
 	 * happened while transferring ready events to userspace w/out
-	 * holding ->lock.
+	 * holding ->wq.lock.
 	 */
 	struct epitem *ovflist;
 
@@ -686,17 +685,17 @@ static int ep_scan_ready_list(struct eventpoll *ep,
 	 * because we want the "sproc" callback to be able to do it
 	 * in a lockless way.
 	 */
-	spin_lock_irqsave(&ep->lock, flags);
+	spin_lock_irqsave(&ep->wq.lock, flags);
 	list_splice_init(&ep->rdllist, &txlist);
 	ep->ovflist = NULL;
-	spin_unlock_irqrestore(&ep->lock, flags);
+	spin_unlock_irqrestore(&ep->wq.lock, flags);
 
 	/*
 	 * Now call the callback function.
 	 */
 	error = (*sproc)(ep, &txlist, priv);
 
-	spin_lock_irqsave(&ep->lock, flags);
+	spin_lock_irqsave(&ep->wq.lock, flags);
 	/*
 	 * During the time we spent inside the "sproc" callback, some
 	 * other events might have been queued by the poll callback.
@@ -738,7 +737,7 @@ static int ep_scan_ready_list(struct eventpoll *ep,
 		if (waitqueue_active(&ep->poll_wait))
 			pwake++;
 	}
-	spin_unlock_irqrestore(&ep->lock, flags);
+	spin_unlock_irqrestore(&ep->wq.lock, flags);
 
 	if (!ep_locked)
 		mutex_unlock(&ep->mtx);
@@ -782,10 +781,10 @@ static int ep_remove(struct eventpoll *ep, struct epitem *epi)
 
 	rb_erase_cached(&epi->rbn, &ep->rbr);
 
-	spin_lock_irqsave(&ep->lock, flags);
+	spin_lock_irqsave(&ep->wq.lock, flags);
 	if (ep_is_linked(&epi->rdllink))
 		list_del_init(&epi->rdllink);
-	spin_unlock_irqrestore(&ep->lock, flags);
+	spin_unlock_irqrestore(&ep->wq.lock, flags);
 
 	wakeup_source_unregister(ep_wakeup_source(epi));
 	/*
@@ -1015,7 +1014,6 @@ static int ep_alloc(struct eventpoll **pep)
 	if (unlikely(!ep))
 		goto free_uid;
 
-	spin_lock_init(&ep->lock);
 	mutex_init(&ep->mtx);
 	init_waitqueue_head(&ep->wq);
 	init_waitqueue_head(&ep->poll_wait);
@@ -1119,7 +1117,7 @@ static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, v
 	struct eventpoll *ep = epi->ep;
 	int ewake = 0;
 
-	spin_lock_irqsave(&ep->lock, flags);
+	spin_lock_irqsave(&ep->wq.lock, flags);
 
 	ep_set_busy_poll_napi_id(epi);
 
@@ -1196,7 +1194,7 @@ static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, v
 		pwake++;
 
 out_unlock:
-	spin_unlock_irqrestore(&ep->lock, flags);
+	spin_unlock_irqrestore(&ep->wq.lock, flags);
 
 	/* We have to call this outside the lock */
 	if (pwake)
@@ -1480,7 +1478,7 @@ static int ep_insert(struct eventpoll *ep, struct epoll_event *event,
 		goto error_remove_epi;
 
 	/* We have to drop the new item inside our item list to keep track of it */
-	spin_lock_irqsave(&ep->lock, flags);
+	spin_lock_irqsave(&ep->wq.lock, flags);
 
 	/* record NAPI ID of new item if present */
 	ep_set_busy_poll_napi_id(epi);
@@ -1497,7 +1495,7 @@ static int ep_insert(struct eventpoll *ep, struct epoll_event *event,
 			pwake++;
 	}
 
-	spin_unlock_irqrestore(&ep->lock, flags);
+	spin_unlock_irqrestore(&ep->wq.lock, flags);
 
 	atomic_long_inc(&ep->user->epoll_watches);
 
@@ -1523,10 +1521,10 @@ static int ep_insert(struct eventpoll *ep, struct epoll_event *event,
 	 * list, since that is used/cleaned only inside a section bound by "mtx".
 	 * And ep_insert() is called with "mtx" held.
 	 */
-	spin_lock_irqsave(&ep->lock, flags);
+	spin_lock_irqsave(&ep->wq.lock, flags);
 	if (ep_is_linked(&epi->rdllink))
 		list_del_init(&epi->rdllink);
-	spin_unlock_irqrestore(&ep->lock, flags);
+	spin_unlock_irqrestore(&ep->wq.lock, flags);
 
 	wakeup_source_unregister(ep_wakeup_source(epi));
 
@@ -1593,7 +1591,7 @@ static int ep_modify(struct eventpoll *ep, struct epitem *epi, struct epoll_even
 	 * list, push it inside.
 	 */
 	if (revents & event->events) {
-		spin_lock_irq(&ep->lock);
+		spin_lock_irq(&ep->wq.lock);
 		if (!ep_is_linked(&epi->rdllink)) {
 			list_add_tail(&epi->rdllink, &ep->rdllist);
 			ep_pm_stay_awake(epi);
@@ -1604,7 +1602,7 @@ static int ep_modify(struct eventpoll *ep, struct epitem *epi, struct epoll_even
 			if (waitqueue_active(&ep->poll_wait))
 				pwake++;
 		}
-		spin_unlock_irq(&ep->lock);
+		spin_unlock_irq(&ep->wq.lock);
 	}
 
 	/* We have to call this outside the lock */
@@ -1754,7 +1752,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		 * caller specified a non blocking operation.
 		 */
 		timed_out = 1;
-		spin_lock_irqsave(&ep->lock, flags);
+		spin_lock_irqsave(&ep->wq.lock, flags);
 		goto check_events;
 	}
 
@@ -1763,7 +1761,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 	if (!ep_events_available(ep))
 		ep_busy_loop(ep, timed_out);
 
-	spin_lock_irqsave(&ep->lock, flags);
+	spin_lock_irqsave(&ep->wq.lock, flags);
 
 	if (!ep_events_available(ep)) {
 		/*
@@ -1805,11 +1803,11 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 				break;
 			}
 
-			spin_unlock_irqrestore(&ep->lock, flags);
+			spin_unlock_irqrestore(&ep->wq.lock, flags);
 			if (!schedule_hrtimeout_range(to, slack, HRTIMER_MODE_ABS))
 				timed_out = 1;
 
-			spin_lock_irqsave(&ep->lock, flags);
+			spin_lock_irqsave(&ep->wq.lock, flags);
 		}
 
 		__remove_wait_queue(&ep->wq, &wait);
@@ -1819,7 +1817,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 	/* Is it worth to try to dig for events ? */
 	eavail = ep_events_available(ep);
 
-	spin_unlock_irqrestore(&ep->lock, flags);
+	spin_unlock_irqrestore(&ep->wq.lock, flags);
 
 	/*
 	 * Try to transfer events to user space. In case we get 0 events and

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-12-01 23:03                 ` Christoph Hellwig
@ 2017-12-05 15:24                   ` Jason Baron
  2017-12-05 15:36                     ` Davidlohr Bueso
  2017-12-06 23:51                     ` Christoph Hellwig
  0 siblings, 2 replies; 16+ messages in thread
From: Jason Baron @ 2017-12-05 15:24 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Ingo Molnar, Peter Zijlstra, Al Viro,
	linux-fsdevel, linux-kernel, Davidlohr Bueso

On 12/01/2017 06:03 PM, Christoph Hellwig wrote:
> On Fri, Dec 01, 2017 at 05:34:50PM -0500, Jason Baron wrote:
>> hmmm...I'm not sure how this suggestion would change the locking rules
>> from what we currently have. Right now we use ep->lock; if we remove
>> that and use ep->wq->lock instead, there is just a 1-to-1 mapping there
>> that has not changed, since ep->wq->lock is currently not used at all.
> 
> True.  The patch below survives the amazingly complex booting and starting
> systemd with lockdep enabled test.  Do we have something resembling an
> epoll test suite?
>

I don't think we have any in the kernel tree proper (other than some
selftests using epoll) but there are tests in ltp and some performance
tests such as:

http://linux-scalability.org/epoll/epoll-test.c
http://www.xmailserver.org/linux-patches/pipetest.c
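
For a quick sanity check, something as small as the sketch below would also
exercise the ep_poll_callback() wakeup path (illustrative only, not one of
the tests above):

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/epoll.h>

	int main(void)
	{
		struct epoll_event ev = { .events = EPOLLIN }, out;
		int epfd, pfd[2];

		/* watch the read end of a pipe; a write must wake epoll_wait() */
		if (pipe(pfd) < 0 || (epfd = epoll_create1(0)) < 0) {
			perror("setup");
			return 1;
		}
		ev.data.fd = pfd[0];
		if (epoll_ctl(epfd, EPOLL_CTL_ADD, pfd[0], &ev) < 0) {
			perror("epoll_ctl");
			return 1;
		}
		if (write(pfd[1], "x", 1) != 1) {
			perror("write");
			return 1;
		}
		if (epoll_wait(epfd, &out, 1, 1000) != 1 || out.data.fd != pfd[0]) {
			fprintf(stderr, "FAILED\n");
			return 1;
		}
		printf("ok\n");
		return 0;
	}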

Thanks,

-Jason

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-12-05 15:24                   ` Jason Baron
@ 2017-12-05 15:36                     ` Davidlohr Bueso
  2017-12-06 23:51                     ` Christoph Hellwig
  1 sibling, 0 replies; 16+ messages in thread
From: Davidlohr Bueso @ 2017-12-05 15:36 UTC (permalink / raw)
  To: Jason Baron
  Cc: Christoph Hellwig, Andrew Morton, Ingo Molnar, Peter Zijlstra,
	Al Viro, linux-fsdevel, linux-kernel

On Tue, 05 Dec 2017, Jason Baron wrote:

>On 12/01/2017 06:03 PM, Christoph Hellwig wrote:

>> True.  The patch below survives the amazingly complex booting and starting
>> systemd with lockdep enabled test.  Do we have something resembling an
>> epoll test suite?
>>
>
>I don't think we have any in the kernel tree proper (other than some
>selftests using epoll) but there are tests in ltp and some performance
>tests such as:
>
>http://linux-scalability.org/epoll/epoll-test.c
>http://www.xmailserver.org/linux-patches/pipetest.c

fyi I'm working on adding epoll to perf-bench, hope to have patches soon.

Thanks,
Davidlohr

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: waitqueue lockdep annotation
  2017-12-05 15:24                   ` Jason Baron
  2017-12-05 15:36                     ` Davidlohr Bueso
@ 2017-12-06 23:51                     ` Christoph Hellwig
  1 sibling, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2017-12-06 23:51 UTC (permalink / raw)
  To: Jason Baron
  Cc: Christoph Hellwig, Andrew Morton, Ingo Molnar, Peter Zijlstra,
	Al Viro, linux-fsdevel, linux-kernel, Davidlohr Bueso

On Tue, Dec 05, 2017 at 10:24:34AM -0500, Jason Baron wrote:
> On 12/01/2017 06:03 PM, Christoph Hellwig wrote:
> > On Fri, Dec 01, 2017 at 05:34:50PM -0500, Jason Baron wrote:
> >> hmmm...I'm not sure how this suggestion would change the locking rules
> >> from what we currently have. Right now we use ep->lock; if we remove
> >> that and use ep->wq->lock instead, there is just a 1-to-1 mapping there
> >> that has not changed, since ep->wq->lock is currently not used at all.
> > 
> > True.  The patch below survives the amazingly complex booting and starting
> > systemd with lockdep enabled test.  Do we have something resembling an
> > epoll test suite?
> >
> 
> I don't think we have any in the kernel tree proper (other than some
> selftests using epoll) but there are tests in ltp and some performance
> tests such as:
> 
> http://linux-scalability.org/epoll/epoll-test.c

That one just seems to keep running until interrupted.  I've run
it for a while and it seems fine, but I don't seem to get any
ok/failed kind of status.

> http://www.xmailserver.org/linux-patches/pipetest.c

Seems to work fine as well, so I'm going to resend the updated patch.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* waitqueue lockdep annotation
@ 2017-12-06 23:52 Christoph Hellwig
  0 siblings, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2017-12-06 23:52 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: Andrew Morton, Al Viro, Jason Baron, linux-fsdevel, linux-kernel

Hi all,

this series adds a strategic lockdep_assert_held to __wake_up_common
to ensure callers really do hold the wait_queue_head lock when calling
the unlocked wake_up variants.  It turns out epoll did not do this
for a fairly common path (hit all the time by systemd during bootup),
so the second patch fixed this instance as well.

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread

Thread overview: 16+ messages
2017-11-30 14:20 waitqueue lockdep annotation Christoph Hellwig
2017-11-30 14:20 ` [PATCH 1/2] sched/wait: assert the wait_queue_head lock is held in __wake_up_common Christoph Hellwig
2017-11-30 14:20 ` [PATCH 2/2] epoll: use proper wake_up variant in ep_poll_callback Christoph Hellwig
2017-11-30 20:50 ` waitqueue lockdep annotation Andrew Morton
2017-11-30 21:38   ` Jason Baron
2017-11-30 22:11     ` Christoph Hellwig
2017-11-30 22:18       ` Jason Baron
2017-12-01 17:11         ` Christoph Hellwig
2017-12-01 19:00           ` Jason Baron
2017-12-01 22:02             ` Christoph Hellwig
2017-12-01 22:34               ` Jason Baron
2017-12-01 23:03                 ` Christoph Hellwig
2017-12-05 15:24                   ` Jason Baron
2017-12-05 15:36                     ` Davidlohr Bueso
2017-12-06 23:51                     ` Christoph Hellwig
2017-12-06 23:52 Christoph Hellwig
