linux-fsdevel.vger.kernel.org archive mirror
* [PATCH 0/6 v2] fsnotify: Fix list corruption for permission events
@ 2016-09-13 20:15 Jan Kara
  2016-09-13 20:15 ` [PATCH 1/6] fsnotify: Add a way to stop queueing events on group shutdown Jan Kara
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Jan Kara @ 2016-09-13 20:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Lino Sanfilippo, Eric Paris,
	Al Viro, Jan Kara

Hi,

this is the second version of the series fixing the list corruption for
fanotify permission events that Miklos reported. Since v1 I have addressed
Miklos' and Lino's comments, added some tags, and also added patches fixing
the false WARN_ON Miklos spotted during his review.

I think the first two patches are good to go - Andrew, can you please merge
them? For the other four (fixing the theoretical barrier issue), I'd prefer
to get some review.

								Honza


* [PATCH 1/6] fsnotify: Add a way to stop queueing events on group shutdown
  2016-09-13 20:15 [PATCH 0/6 v2] fsnotify: Fix list corruption for permission events Jan Kara
@ 2016-09-13 20:15 ` Jan Kara
  2016-09-13 20:15 ` [PATCH 2/6] fanotify: Fix list corruption in fanotify_get_response() Jan Kara
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Jan Kara @ 2016-09-13 20:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Lino Sanfilippo, Eric Paris,
	Al Viro, Jan Kara, stable

Implement a function that can be called when a group is being shut down
to stop queueing new events to the group. Fanotify will use this.
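
As a usage illustration (a simplified sketch based on this series, not part
of the patch itself - the example_backend_release() helper is made up, and
fanotify's real call site is only added in the next patch), a backend
tearing down a group would do roughly:

	static void example_backend_release(struct fsnotify_group *group)
	{
		/*
		 * After this returns, fsnotify_add_event() refuses to queue
		 * new events, so the teardown below cannot race with
		 * producers adding to the notification list.
		 */
		fsnotify_group_stop_queueing(group);

		/* ... drain or answer any events still on the queue ... */

		fsnotify_destroy_group(group);
	}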

Fixes: 5838d4442bd5971687b72221736222637e03140d
CC: stable@vger.kernel.org
Reviewed-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/notify/group.c                | 19 +++++++++++++++++++
 fs/notify/notification.c         |  8 +++++++-
 include/linux/fsnotify_backend.h |  3 +++
 3 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/fs/notify/group.c b/fs/notify/group.c
index 3e2dd85be5dd..b47f7cfdcaa4 100644
--- a/fs/notify/group.c
+++ b/fs/notify/group.c
@@ -40,6 +40,17 @@ static void fsnotify_final_destroy_group(struct fsnotify_group *group)
 }
 
 /*
+ * Stop queueing new events for this group. Once this function returns
+ * fsnotify_add_event() will not add any new events to the group's queue.
+ */
+void fsnotify_group_stop_queueing(struct fsnotify_group *group)
+{
+	mutex_lock(&group->notification_mutex);
+	group->shutdown = true;
+	mutex_unlock(&group->notification_mutex);
+}
+
+/*
  * Trying to get rid of a group. Remove all marks, flush all events and release
  * the group reference.
  * Note that another thread calling fsnotify_clear_marks_by_group() may still
@@ -47,6 +58,14 @@ static void fsnotify_final_destroy_group(struct fsnotify_group *group)
  */
 void fsnotify_destroy_group(struct fsnotify_group *group)
 {
+	/*
+	 * Stop queueing new events. The code below is careful enough to not
+	 * require this but fanotify needs to stop queueing events even before
+	 * fsnotify_destroy_group() is called and this makes the other callers
+	 * of fsnotify_destroy_group() see the same behavior.
+	 */
+	fsnotify_group_stop_queueing(group);
+
 	/* clear all inode marks for this group, attach them to destroy_list */
 	fsnotify_detach_group_marks(group);
 
diff --git a/fs/notify/notification.c b/fs/notify/notification.c
index a95d8e037aeb..3d76e65ff84f 100644
--- a/fs/notify/notification.c
+++ b/fs/notify/notification.c
@@ -82,7 +82,8 @@ void fsnotify_destroy_event(struct fsnotify_group *group,
  * Add an event to the group notification queue.  The group can later pull this
  * event off the queue to deal with.  The function returns 0 if the event was
  * added to the queue, 1 if the event was merged with some other queued event,
- * 2 if the queue of events has overflown.
+ * 2 if the event was not queued - either the queue of events has overflown
+ * or the group is shutting down.
  */
 int fsnotify_add_event(struct fsnotify_group *group,
 		       struct fsnotify_event *event,
@@ -96,6 +97,11 @@ int fsnotify_add_event(struct fsnotify_group *group,
 
 	mutex_lock(&group->notification_mutex);
 
+	if (group->shutdown) {
+		mutex_unlock(&group->notification_mutex);
+		return 2;
+	}
+
 	if (group->q_len >= group->max_events) {
 		ret = 2;
 		/* Queue overflow event only if it isn't already queued */
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index 58205f33af02..40a9e99de703 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -148,6 +148,7 @@ struct fsnotify_group {
 	#define FS_PRIO_1	1 /* fanotify content based access control */
 	#define FS_PRIO_2	2 /* fanotify pre-content access */
 	unsigned int priority;
+	bool shutdown;		/* group is being shut down, don't queue more events */
 
 	/* stores all fastpath marks assoc with this group so they can be cleaned on unregister */
 	struct mutex mark_mutex;	/* protect marks_list */
@@ -292,6 +293,8 @@ extern struct fsnotify_group *fsnotify_alloc_group(const struct fsnotify_ops *op
 extern void fsnotify_get_group(struct fsnotify_group *group);
 /* drop reference on a group from fsnotify_alloc_group */
 extern void fsnotify_put_group(struct fsnotify_group *group);
+/* group destruction begins, stop queuing new events */
+extern void fsnotify_group_stop_queueing(struct fsnotify_group *group);
 /* destroy group */
 extern void fsnotify_destroy_group(struct fsnotify_group *group);
 /* fasync handler function */
-- 
2.6.6



* [PATCH 2/6] fanotify: Fix list corruption in fanotify_get_response()
  2016-09-13 20:15 [PATCH 0/6 v2] fsnotify: Fix list corruption for permission events Jan Kara
  2016-09-13 20:15 ` [PATCH 1/6] fsnotify: Add a way to stop queueing events on group shutdown Jan Kara
@ 2016-09-13 20:15 ` Jan Kara
  2016-09-13 20:15 ` [PATCH 3/6] fsnotify: Drop notification_mutex before destroying event Jan Kara
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Jan Kara @ 2016-09-13 20:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Lino Sanfilippo, Eric Paris,
	Al Viro, Jan Kara, stable

fanotify_get_response() calls fsnotify_remove_event() when it finds that
the group is being released from fanotify_release() (bypass_perm is set).
However, the event it removes need not be in the group's notification
queue anymore - it may have already moved to access_list (userspace read
the event before closing the fanotify instance fd), which is protected by
a different lock. Thus when fsnotify_remove_event() races with
fanotify_release() operating on access_list, the list can get corrupted.

Fix the problem by moving all the logic removing permission events from
the lists to one place - fanotify_release().
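
For reference, the interleaving described above looks roughly like this
with the pre-patch code (a reconstruction from the changelog, not literal
code):

	/*
	 * reader/handler side                  close() -> fanotify_release()
	 *
	 * userspace read the event, so it
	 * sits on access_list (protected by
	 * access_lock)
	 *                                       sets bypass_perm and starts
	 *                                       walking access_list under
	 *                                       access_lock, deleting events
	 * fanotify_get_response() sees
	 * bypass_perm set, event->response
	 * still 0, and calls
	 * fsnotify_remove_event(), which
	 * takes only notification_mutex and
	 * does list_del_init(&event->list)
	 *
	 * The same list_head is manipulated under two different locks, so
	 * access_list can end up corrupted.
	 */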

Fixes: 5838d4442bd5971687b72221736222637e03140d
CC: stable@vger.kernel.org
Reported-and-tested-by: Miklos Szeredi <mszeredi@redhat.com>
Reviewed-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/notify/fanotify/fanotify.c      | 13 +------------
 fs/notify/fanotify/fanotify_user.c | 36 ++++++++++++++++++++++++------------
 fs/notify/notification.c           | 15 ---------------
 include/linux/fsnotify_backend.h   |  3 ---
 4 files changed, 25 insertions(+), 42 deletions(-)

diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
index d2f97ecca6a5..e0e5f7c3c99f 100644
--- a/fs/notify/fanotify/fanotify.c
+++ b/fs/notify/fanotify/fanotify.c
@@ -67,18 +67,7 @@ static int fanotify_get_response(struct fsnotify_group *group,
 
 	pr_debug("%s: group=%p event=%p\n", __func__, group, event);
 
-	wait_event(group->fanotify_data.access_waitq, event->response ||
-				atomic_read(&group->fanotify_data.bypass_perm));
-
-	if (!event->response) {	/* bypass_perm set */
-		/*
-		 * Event was canceled because group is being destroyed. Remove
-		 * it from group's event list because we are responsible for
-		 * freeing the permission event.
-		 */
-		fsnotify_remove_event(group, &event->fae.fse);
-		return 0;
-	}
+	wait_event(group->fanotify_data.access_waitq, event->response);
 
 	/* userspace responded, convert to something usable */
 	switch (event->response) {
diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
index 8e8e6bcd1d43..a64313868d3a 100644
--- a/fs/notify/fanotify/fanotify_user.c
+++ b/fs/notify/fanotify/fanotify_user.c
@@ -358,16 +358,20 @@ static int fanotify_release(struct inode *ignored, struct file *file)
 
 #ifdef CONFIG_FANOTIFY_ACCESS_PERMISSIONS
 	struct fanotify_perm_event_info *event, *next;
+	struct fsnotify_event *fsn_event;
 
 	/*
-	 * There may be still new events arriving in the notification queue
-	 * but since userspace cannot use fanotify fd anymore, no event can
-	 * enter or leave access_list by now.
+	 * Stop new events from arriving in the notification queue. Since
+	 * userspace cannot use the fanotify fd anymore, no event can enter or
+	 * leave access_list by now either.
 	 */
-	spin_lock(&group->fanotify_data.access_lock);
-
-	atomic_inc(&group->fanotify_data.bypass_perm);
+	fsnotify_group_stop_queueing(group);
 
+	/*
+	 * Process all permission events on access_list and notification queue
+	 * and simulate reply from userspace.
+	 */
+	spin_lock(&group->fanotify_data.access_lock);
 	list_for_each_entry_safe(event, next, &group->fanotify_data.access_list,
 				 fae.fse.list) {
 		pr_debug("%s: found group=%p event=%p\n", __func__, group,
@@ -379,12 +383,21 @@ static int fanotify_release(struct inode *ignored, struct file *file)
 	spin_unlock(&group->fanotify_data.access_lock);
 
 	/*
-	 * Since bypass_perm is set, newly queued events will not wait for
-	 * access response. Wake up the already sleeping ones now.
-	 * synchronize_srcu() in fsnotify_destroy_group() will wait for all
-	 * processes sleeping in fanotify_handle_event() waiting for access
-	 * response and thus also for all permission events to be freed.
+	 * Destroy all non-permission events. For permission events just
+	 * dequeue them and set the response. They will be freed once the
+	 * response is consumed and fanotify_get_response() returns.
 	 */
+	mutex_lock(&group->notification_mutex);
+	while (!fsnotify_notify_queue_is_empty(group)) {
+		fsn_event = fsnotify_remove_first_event(group);
+		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS))
+			fsnotify_destroy_event(group, fsn_event);
+		else
+			FANOTIFY_PE(fsn_event)->response = FAN_ALLOW;
+	}
+	mutex_unlock(&group->notification_mutex);
+
+	/* Response for all permission events is set, wake up waiters */
 	wake_up(&group->fanotify_data.access_waitq);
 #endif
 
@@ -755,7 +768,6 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
 	spin_lock_init(&group->fanotify_data.access_lock);
 	init_waitqueue_head(&group->fanotify_data.access_waitq);
 	INIT_LIST_HEAD(&group->fanotify_data.access_list);
-	atomic_set(&group->fanotify_data.bypass_perm, 0);
 #endif
 	switch (flags & FAN_ALL_CLASS_BITS) {
 	case FAN_CLASS_NOTIF:
diff --git a/fs/notify/notification.c b/fs/notify/notification.c
index 3d76e65ff84f..e455e83ceeeb 100644
--- a/fs/notify/notification.c
+++ b/fs/notify/notification.c
@@ -132,21 +132,6 @@ queue:
 }
 
 /*
- * Remove @event from group's notification queue. It is the responsibility of
- * the caller to destroy the event.
- */
-void fsnotify_remove_event(struct fsnotify_group *group,
-			   struct fsnotify_event *event)
-{
-	mutex_lock(&group->notification_mutex);
-	if (!list_empty(&event->list)) {
-		list_del_init(&event->list);
-		group->q_len--;
-	}
-	mutex_unlock(&group->notification_mutex);
-}
-
-/*
  * Remove and return the first event from the notification list.  It is the
  * responsibility of the caller to destroy the obtained event
  */
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index 40a9e99de703..7268ed076be8 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -180,7 +180,6 @@ struct fsnotify_group {
 			spinlock_t access_lock;
 			struct list_head access_list;
 			wait_queue_head_t access_waitq;
-			atomic_t bypass_perm;
 #endif /* CONFIG_FANOTIFY_ACCESS_PERMISSIONS */
 			int f_flags;
 			unsigned int max_marks;
@@ -307,8 +306,6 @@ extern int fsnotify_add_event(struct fsnotify_group *group,
 			      struct fsnotify_event *event,
 			      int (*merge)(struct list_head *,
 					   struct fsnotify_event *));
-/* Remove passed event from groups notification queue */
-extern void fsnotify_remove_event(struct fsnotify_group *group, struct fsnotify_event *event);
 /* true if the group notification queue is empty */
 extern bool fsnotify_notify_queue_is_empty(struct fsnotify_group *group);
 /* return, but do not dequeue the first event on the notification queue */
-- 
2.6.6



* [PATCH 3/6] fsnotify: Drop notification_mutex before destroying event
  2016-09-13 20:15 [PATCH 0/6 v2] fsnotify: Fix list corruption for permission events Jan Kara
  2016-09-13 20:15 ` [PATCH 1/6] fsnotify: Add a way to stop queueing events on group shutdown Jan Kara
  2016-09-13 20:15 ` [PATCH 2/6] fanotify: Fix list corruption in fanotify_get_response() Jan Kara
@ 2016-09-13 20:15 ` Jan Kara
  2016-09-14 17:11   ` Lino Sanfilippo
  2016-09-13 20:15 ` [PATCH 4/6] fsnotify: Convert notification_mutex to a spinlock Jan Kara
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Jan Kara @ 2016-09-13 20:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Lino Sanfilippo, Eric Paris,
	Al Viro, Jan Kara

fsnotify_flush_notify() and fanotify_release() destroy notification
events while holding notification_mutex. Destruction of a fanotify event
includes a path_put() call which may end up calling into a filesystem to
delete an inode if we happen to hold the last dentry reference, which in
turn holds the last inode reference. That may violate lock ordering for
some filesystems since notification_mutex is also acquired e.g. during
write when generating a fanotify event. Also, this is the only thing that
forces notification_mutex to be a sleeping lock. So drop
notification_mutex before destroying a notification event.
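
The resulting shape of both call sites is the usual "dequeue under the
lock, destroy outside of it" loop; a condensed sketch mirroring the hunks
below (group is the fsnotify_group being drained):

	struct fsnotify_event *event;

	mutex_lock(&group->notification_mutex);
	while (!fsnotify_notify_queue_is_empty(group)) {
		event = fsnotify_remove_first_event(group);
		mutex_unlock(&group->notification_mutex);
		/* may sleep and call back into the filesystem via path_put() */
		fsnotify_destroy_event(group, event);
		mutex_lock(&group->notification_mutex);
	}
	mutex_unlock(&group->notification_mutex);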

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/notify/fanotify/fanotify_user.c | 6 ++++--
 fs/notify/notification.c           | 2 ++
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
index a64313868d3a..46d135c4988f 100644
--- a/fs/notify/fanotify/fanotify_user.c
+++ b/fs/notify/fanotify/fanotify_user.c
@@ -390,9 +390,11 @@ static int fanotify_release(struct inode *ignored, struct file *file)
 	mutex_lock(&group->notification_mutex);
 	while (!fsnotify_notify_queue_is_empty(group)) {
 		fsn_event = fsnotify_remove_first_event(group);
-		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS))
+		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS)) {
+			mutex_unlock(&group->notification_mutex);
 			fsnotify_destroy_event(group, fsn_event);
-		else
+			mutex_lock(&group->notification_mutex);
+		} else
 			FANOTIFY_PE(fsn_event)->response = FAN_ALLOW;
 	}
 	mutex_unlock(&group->notification_mutex);
diff --git a/fs/notify/notification.c b/fs/notify/notification.c
index e455e83ceeeb..7d563dea52a4 100644
--- a/fs/notify/notification.c
+++ b/fs/notify/notification.c
@@ -178,7 +178,9 @@ void fsnotify_flush_notify(struct fsnotify_group *group)
 	mutex_lock(&group->notification_mutex);
 	while (!fsnotify_notify_queue_is_empty(group)) {
 		event = fsnotify_remove_first_event(group);
+		mutex_unlock(&group->notification_mutex);
 		fsnotify_destroy_event(group, event);
+		mutex_lock(&group->notification_mutex);
 	}
 	mutex_unlock(&group->notification_mutex);
 }
-- 
2.6.6



* [PATCH 4/6] fsnotify: Convert notification_mutex to a spinlock
  2016-09-13 20:15 [PATCH 0/6 v2] fsnotify: Fix list corruption for permission events Jan Kara
                   ` (2 preceding siblings ...)
  2016-09-13 20:15 ` [PATCH 3/6] fsnotify: Drop notification_mutex before destroying event Jan Kara
@ 2016-09-13 20:15 ` Jan Kara
  2016-09-14 17:13   ` Lino Sanfilippo
  2016-09-13 20:15 ` [PATCH 5/6] fanotify: Use notification_lock instead of access_lock Jan Kara
  2016-09-13 20:15 ` [PATCH 6/6] fanotify: Fix possible false warning when freeing events Jan Kara
  5 siblings, 1 reply; 13+ messages in thread
From: Jan Kara @ 2016-09-13 20:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Lino Sanfilippo, Eric Paris,
	Al Viro, Jan Kara

notification_mutex is used to protect the list of pending events. As
such there's no reason to use a sleeping lock for it. Convert it to a
spinlock.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/notify/fanotify/fanotify_user.c | 26 +++++++++++++-------------
 fs/notify/group.c                  |  6 +++---
 fs/notify/inotify/inotify_user.c   | 16 ++++++++--------
 fs/notify/notification.c           | 24 ++++++++++++------------
 include/linux/fsnotify_backend.h   |  2 +-
 5 files changed, 37 insertions(+), 37 deletions(-)

diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
index 46d135c4988f..e6f3fe9bb2ed 100644
--- a/fs/notify/fanotify/fanotify_user.c
+++ b/fs/notify/fanotify/fanotify_user.c
@@ -49,12 +49,12 @@ struct kmem_cache *fanotify_perm_event_cachep __read_mostly;
  * enough to fit in "count". Return an error pointer if the count
  * is not large enough.
  *
- * Called with the group->notification_mutex held.
+ * Called with the group->notification_lock held.
  */
 static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
 					    size_t count)
 {
-	BUG_ON(!mutex_is_locked(&group->notification_mutex));
+	BUG_ON(!spin_is_locked(&group->notification_lock));
 
 	pr_debug("%s: group=%p count=%zd\n", __func__, group, count);
 
@@ -64,7 +64,7 @@ static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
 	if (FAN_EVENT_METADATA_LEN > count)
 		return ERR_PTR(-EINVAL);
 
-	/* held the notification_mutex the whole time, so this is the
+	/* held the notification_lock the whole time, so this is the
 	 * same event we peeked above */
 	return fsnotify_remove_first_event(group);
 }
@@ -244,10 +244,10 @@ static unsigned int fanotify_poll(struct file *file, poll_table *wait)
 	int ret = 0;
 
 	poll_wait(file, &group->notification_waitq, wait);
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 	if (!fsnotify_notify_queue_is_empty(group))
 		ret = POLLIN | POLLRDNORM;
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 
 	return ret;
 }
@@ -268,9 +268,9 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
 
 	add_wait_queue(&group->notification_waitq, &wait);
 	while (1) {
-		mutex_lock(&group->notification_mutex);
+		spin_lock(&group->notification_lock);
 		kevent = get_one_event(group, count);
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 
 		if (IS_ERR(kevent)) {
 			ret = PTR_ERR(kevent);
@@ -387,17 +387,17 @@ static int fanotify_release(struct inode *ignored, struct file *file)
 	 * dequeue them and set the response. They will be freed once the
 	 * response is consumed and fanotify_get_response() returns.
 	 */
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 	while (!fsnotify_notify_queue_is_empty(group)) {
 		fsn_event = fsnotify_remove_first_event(group);
 		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS)) {
-			mutex_unlock(&group->notification_mutex);
+			spin_unlock(&group->notification_lock);
 			fsnotify_destroy_event(group, fsn_event);
-			mutex_lock(&group->notification_mutex);
+			spin_lock(&group->notification_lock);
 		} else
 			FANOTIFY_PE(fsn_event)->response = FAN_ALLOW;
 	}
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 
 	/* Response for all permission events is set, wake up waiters */
 	wake_up(&group->fanotify_data.access_waitq);
@@ -423,10 +423,10 @@ static long fanotify_ioctl(struct file *file, unsigned int cmd, unsigned long ar
 
 	switch (cmd) {
 	case FIONREAD:
-		mutex_lock(&group->notification_mutex);
+		spin_lock(&group->notification_lock);
 		list_for_each_entry(fsn_event, &group->notification_list, list)
 			send_len += FAN_EVENT_METADATA_LEN;
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 		ret = put_user(send_len, (int __user *) p);
 		break;
 	}
diff --git a/fs/notify/group.c b/fs/notify/group.c
index b47f7cfdcaa4..fbe3cbebec16 100644
--- a/fs/notify/group.c
+++ b/fs/notify/group.c
@@ -45,9 +45,9 @@ static void fsnotify_final_destroy_group(struct fsnotify_group *group)
  */
 void fsnotify_group_stop_queueing(struct fsnotify_group *group)
 {
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 	group->shutdown = true;
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 }
 
 /*
@@ -125,7 +125,7 @@ struct fsnotify_group *fsnotify_alloc_group(const struct fsnotify_ops *ops)
 	atomic_set(&group->refcnt, 1);
 	atomic_set(&group->num_marks, 0);
 
-	mutex_init(&group->notification_mutex);
+	spin_lock_init(&group->notification_lock);
 	INIT_LIST_HEAD(&group->notification_list);
 	init_waitqueue_head(&group->notification_waitq);
 	group->max_events = UINT_MAX;
diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
index b8d08d0d0a4d..69d1ea3d292a 100644
--- a/fs/notify/inotify/inotify_user.c
+++ b/fs/notify/inotify/inotify_user.c
@@ -115,10 +115,10 @@ static unsigned int inotify_poll(struct file *file, poll_table *wait)
 	int ret = 0;
 
 	poll_wait(file, &group->notification_waitq, wait);
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 	if (!fsnotify_notify_queue_is_empty(group))
 		ret = POLLIN | POLLRDNORM;
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 
 	return ret;
 }
@@ -138,7 +138,7 @@ static int round_event_name_len(struct fsnotify_event *fsn_event)
  * enough to fit in "count". Return an error pointer if
  * not large enough.
  *
- * Called with the group->notification_mutex held.
+ * Called with the group->notification_lock held.
  */
 static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
 					    size_t count)
@@ -157,7 +157,7 @@ static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
 	if (event_size > count)
 		return ERR_PTR(-EINVAL);
 
-	/* held the notification_mutex the whole time, so this is the
+	/* held the notification_lock the whole time, so this is the
 	 * same event we peeked above */
 	fsnotify_remove_first_event(group);
 
@@ -234,9 +234,9 @@ static ssize_t inotify_read(struct file *file, char __user *buf,
 
 	add_wait_queue(&group->notification_waitq, &wait);
 	while (1) {
-		mutex_lock(&group->notification_mutex);
+		spin_lock(&group->notification_lock);
 		kevent = get_one_event(group, count);
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 
 		pr_debug("%s: group=%p kevent=%p\n", __func__, group, kevent);
 
@@ -300,13 +300,13 @@ static long inotify_ioctl(struct file *file, unsigned int cmd,
 
 	switch (cmd) {
 	case FIONREAD:
-		mutex_lock(&group->notification_mutex);
+		spin_lock(&group->notification_lock);
 		list_for_each_entry(fsn_event, &group->notification_list,
 				    list) {
 			send_len += sizeof(struct inotify_event);
 			send_len += round_event_name_len(fsn_event);
 		}
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 		ret = put_user(send_len, (int __user *) p);
 		break;
 	}
diff --git a/fs/notify/notification.c b/fs/notify/notification.c
index 7d563dea52a4..070d255b24a2 100644
--- a/fs/notify/notification.c
+++ b/fs/notify/notification.c
@@ -63,7 +63,7 @@ EXPORT_SYMBOL_GPL(fsnotify_get_cookie);
 /* return true if the notify queue is empty, false otherwise */
 bool fsnotify_notify_queue_is_empty(struct fsnotify_group *group)
 {
-	BUG_ON(!mutex_is_locked(&group->notification_mutex));
+	BUG_ON(!spin_is_locked(&group->notification_lock));
 	return list_empty(&group->notification_list) ? true : false;
 }
 
@@ -95,10 +95,10 @@ int fsnotify_add_event(struct fsnotify_group *group,
 
 	pr_debug("%s: group=%p event=%p\n", __func__, group, event);
 
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 
 	if (group->shutdown) {
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 		return 2;
 	}
 
@@ -106,7 +106,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
 		ret = 2;
 		/* Queue overflow event only if it isn't already queued */
 		if (!list_empty(&group->overflow_event->list)) {
-			mutex_unlock(&group->notification_mutex);
+			spin_unlock(&group->notification_lock);
 			return ret;
 		}
 		event = group->overflow_event;
@@ -116,7 +116,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
 	if (!list_empty(list) && merge) {
 		ret = merge(list, event);
 		if (ret) {
-			mutex_unlock(&group->notification_mutex);
+			spin_unlock(&group->notification_lock);
 			return ret;
 		}
 	}
@@ -124,7 +124,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
 queue:
 	group->q_len++;
 	list_add_tail(&event->list, list);
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 
 	wake_up(&group->notification_waitq);
 	kill_fasync(&group->fsn_fa, SIGIO, POLL_IN);
@@ -139,7 +139,7 @@ struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group)
 {
 	struct fsnotify_event *event;
 
-	BUG_ON(!mutex_is_locked(&group->notification_mutex));
+	BUG_ON(!spin_is_locked(&group->notification_lock));
 
 	pr_debug("%s: group=%p\n", __func__, group);
 
@@ -161,7 +161,7 @@ struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group)
  */
 struct fsnotify_event *fsnotify_peek_first_event(struct fsnotify_group *group)
 {
-	BUG_ON(!mutex_is_locked(&group->notification_mutex));
+	BUG_ON(!spin_is_locked(&group->notification_lock));
 
 	return list_first_entry(&group->notification_list,
 				struct fsnotify_event, list);
@@ -175,14 +175,14 @@ void fsnotify_flush_notify(struct fsnotify_group *group)
 {
 	struct fsnotify_event *event;
 
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 	while (!fsnotify_notify_queue_is_empty(group)) {
 		event = fsnotify_remove_first_event(group);
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 		fsnotify_destroy_event(group, event);
-		mutex_lock(&group->notification_mutex);
+		spin_lock(&group->notification_lock);
 	}
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 }
 
 /*
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index 7268ed076be8..0713e873b1c9 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -135,7 +135,7 @@ struct fsnotify_group {
 	const struct fsnotify_ops *ops;	/* how this group handles things */
 
 	/* needed to send notification to userspace */
-	struct mutex notification_mutex;	/* protect the notification_list */
+	spinlock_t notification_lock;		/* protect the notification_list */
 	struct list_head notification_list;	/* list of event_holder this group needs to send to userspace */
 	wait_queue_head_t notification_waitq;	/* read() on the notification file blocks on this waitq */
 	unsigned int q_len;			/* events on the queue */
-- 
2.6.6



* [PATCH 5/6] fanotify: Use notification_lock instead of access_lock
  2016-09-13 20:15 [PATCH 0/6 v2] fsnotify: Fix list corruption for permission events Jan Kara
                   ` (3 preceding siblings ...)
  2016-09-13 20:15 ` [PATCH 4/6] fsnotify: Convert notification_mutex to a spinlock Jan Kara
@ 2016-09-13 20:15 ` Jan Kara
  2016-09-14 17:14   ` Lino Sanfilippo
  2016-09-13 20:15 ` [PATCH 6/6] fanotify: Fix possible false warning when freeing events Jan Kara
  5 siblings, 1 reply; 13+ messages in thread
From: Jan Kara @ 2016-09-13 20:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Lino Sanfilippo, Eric Paris,
	Al Viro, Jan Kara

Fanotify code has its own lock (access_lock) to protect the list of events
waiting for a response from userspace. However, this is somewhat awkward
as the same list_head in the event is protected by notification_lock
while it is part of the notification queue and by access_lock while it is
part of the fanotify private queue, which makes reliable checks in the
generic code difficult. So make fanotify use the same lock -
notification_lock - for protecting its private event list as well.
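
In other words (a summary of the locking before and after this patch, for
orientation only):

	/*
	 * Before: event->list is protected by
	 *   - group->notification_lock while the event is on
	 *     group->notification_list, and
	 *   - group->fanotify_data.access_lock while it is on
	 *     group->fanotify_data.access_list.
	 *
	 * After: group->notification_lock protects event->list in both
	 * states, so generic code can inspect list_empty(&event->list)
	 * under a single lock.
	 */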

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/notify/fanotify/fanotify_user.c | 13 +++++--------
 include/linux/fsnotify_backend.h   |  1 -
 2 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
index e6f3fe9bb2ed..aed06fe2533d 100644
--- a/fs/notify/fanotify/fanotify_user.c
+++ b/fs/notify/fanotify/fanotify_user.c
@@ -147,7 +147,7 @@ static struct fanotify_perm_event_info *dequeue_event(
 {
 	struct fanotify_perm_event_info *event, *return_e = NULL;
 
-	spin_lock(&group->fanotify_data.access_lock);
+	spin_lock(&group->notification_lock);
 	list_for_each_entry(event, &group->fanotify_data.access_list,
 			    fae.fse.list) {
 		if (event->fd != fd)
@@ -157,7 +157,7 @@ static struct fanotify_perm_event_info *dequeue_event(
 		return_e = event;
 		break;
 	}
-	spin_unlock(&group->fanotify_data.access_lock);
+	spin_unlock(&group->notification_lock);
 
 	pr_debug("%s: found return_re=%p\n", __func__, return_e);
 
@@ -309,10 +309,10 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
 				wake_up(&group->fanotify_data.access_waitq);
 				break;
 			}
-			spin_lock(&group->fanotify_data.access_lock);
+			spin_lock(&group->notification_lock);
 			list_add_tail(&kevent->list,
 				      &group->fanotify_data.access_list);
-			spin_unlock(&group->fanotify_data.access_lock);
+			spin_unlock(&group->notification_lock);
 #endif
 		}
 		buf += ret;
@@ -371,7 +371,7 @@ static int fanotify_release(struct inode *ignored, struct file *file)
 	 * Process all permission events on access_list and notification queue
 	 * and simulate reply from userspace.
 	 */
-	spin_lock(&group->fanotify_data.access_lock);
+	spin_lock(&group->notification_lock);
 	list_for_each_entry_safe(event, next, &group->fanotify_data.access_list,
 				 fae.fse.list) {
 		pr_debug("%s: found group=%p event=%p\n", __func__, group,
@@ -380,14 +380,12 @@ static int fanotify_release(struct inode *ignored, struct file *file)
 		list_del_init(&event->fae.fse.list);
 		event->response = FAN_ALLOW;
 	}
-	spin_unlock(&group->fanotify_data.access_lock);
 
 	/*
 	 * Destroy all non-permission events. For permission events just
 	 * dequeue them and set the response. They will be freed once the
 	 * response is consumed and fanotify_get_response() returns.
 	 */
-	spin_lock(&group->notification_lock);
 	while (!fsnotify_notify_queue_is_empty(group)) {
 		fsn_event = fsnotify_remove_first_event(group);
 		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS)) {
@@ -767,7 +765,6 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
 		event_f_flags |= O_LARGEFILE;
 	group->fanotify_data.f_flags = event_f_flags;
 #ifdef CONFIG_FANOTIFY_ACCESS_PERMISSIONS
-	spin_lock_init(&group->fanotify_data.access_lock);
 	init_waitqueue_head(&group->fanotify_data.access_waitq);
 	INIT_LIST_HEAD(&group->fanotify_data.access_list);
 #endif
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index 0713e873b1c9..79467b239fcf 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -177,7 +177,6 @@ struct fsnotify_group {
 		struct fanotify_group_private_data {
 #ifdef CONFIG_FANOTIFY_ACCESS_PERMISSIONS
 			/* allows a group to block waiting for a userspace response */
-			spinlock_t access_lock;
 			struct list_head access_list;
 			wait_queue_head_t access_waitq;
 #endif /* CONFIG_FANOTIFY_ACCESS_PERMISSIONS */
-- 
2.6.6



* [PATCH 6/6] fanotify: Fix possible false warning when freeing events
  2016-09-13 20:15 [PATCH 0/6 v2] fsnotify: Fix list corruption for permission events Jan Kara
                   ` (4 preceding siblings ...)
  2016-09-13 20:15 ` [PATCH 5/6] fanotify: Use notification_lock instead of access_lock Jan Kara
@ 2016-09-13 20:15 ` Jan Kara
  2016-09-14 17:14   ` Lino Sanfilippo
  5 siblings, 1 reply; 13+ messages in thread
From: Jan Kara @ 2016-09-13 20:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Lino Sanfilippo, Eric Paris,
	Al Viro, Jan Kara

When freeing permission events with fsnotify_destroy_event(), the warning
WARN_ON(!list_empty(&event->list));
may trigger falsely. This is because although fanotify_get_response() saw
event->response set, there is nothing that guarantees the current CPU also
sees the removal of the event from the list. Add proper locking around
the WARN_ON() to avoid the false warning.
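
A sketch of the scenario (reconstructed from the description above; the
actual fix is in the hunk below):

	/*
	 * CPU A (answers the event,          CPU B (fanotify_get_response)
	 *        e.g. fanotify_release())
	 *
	 * spin_lock(&notification_lock)
	 * list_del_init(&event->list)
	 * event->response = FAN_ALLOW
	 * spin_unlock(&notification_lock)
	 *                                     wait_event(..., event->response)
	 *                                     sees event->response != 0 without
	 *                                     taking the lock, so it has no
	 *                                     guarantee the list_del_init() is
	 *                                     visible yet
	 *                                     fsnotify_destroy_event()
	 *                                       WARN_ON(!list_empty(&event->list))
	 *                                       can fire although the event was
	 *                                       removed correctly
	 *
	 * Taking notification_lock around the check (when the lockless test
	 * still sees a non-empty list) provides the ordering that makes the
	 * removal visible.
	 */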

Reported-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/notify/notification.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/fs/notify/notification.c b/fs/notify/notification.c
index 070d255b24a2..6b7f430bb2de 100644
--- a/fs/notify/notification.c
+++ b/fs/notify/notification.c
@@ -73,8 +73,17 @@ void fsnotify_destroy_event(struct fsnotify_group *group,
 	/* Overflow events are per-group and we don't want to free them */
 	if (!event || event->mask == FS_Q_OVERFLOW)
 		return;
-	/* If the event is still queued, we have a problem... */
-	WARN_ON(!list_empty(&event->list));
+	/*
+	 * If the event is still queued, we have a problem... Do an unreliable
+	 * lockless check first to avoid locking in the common case. The
+	 * locking may be necessary for permission events which got removed
+	 * from the list by a different CPU than the one freeing the event.
+	 */
+	if (!list_empty(&event->list)) {
+		spin_lock(&group->notification_lock);
+		WARN_ON(!list_empty(&event->list));
+		spin_unlock(&group->notification_lock);
+	}
 	group->ops->free_event(event);
 }
 
-- 
2.6.6



* Re: [PATCH 3/6] fsnotify: Drop notification_mutex before destroying event
  2016-09-13 20:15 ` [PATCH 3/6] fsnotify: Drop notification_mutex before destroying event Jan Kara
@ 2016-09-14 17:11   ` Lino Sanfilippo
  0 siblings, 0 replies; 13+ messages in thread
From: Lino Sanfilippo @ 2016-09-14 17:11 UTC (permalink / raw)
  To: Jan Kara, Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Eric Paris, Al Viro

On 13.09.2016 22:15, Jan Kara wrote:
> fsnotify_flush_notify() and fanotify_release() destroy notification
> event while holding notification_mutex. Destruction of fanotify event
> includes a path_put() call which may end up calling into a filesystem to
> delete an inode if we happen to be the last holders of dentry reference
> which happens to be the last holder of inode reference. That may violate
> lock ordering for some filesystems since notification_mutex is also
> acquired e. g. during write when generating fanotify event. Also this is
> the only thing that forces notification_mutex to be a sleeping lock.  So
> drop notification_mutex before destroying a notification event.
> 
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  fs/notify/fanotify/fanotify_user.c | 6 ++++--
>  fs/notify/notification.c           | 2 ++
>  2 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
> index a64313868d3a..46d135c4988f 100644
> --- a/fs/notify/fanotify/fanotify_user.c
> +++ b/fs/notify/fanotify/fanotify_user.c
> @@ -390,9 +390,11 @@ static int fanotify_release(struct inode *ignored, struct file *file)
>  	mutex_lock(&group->notification_mutex);
>  	while (!fsnotify_notify_queue_is_empty(group)) {
>  		fsn_event = fsnotify_remove_first_event(group);
> -		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS))
> +		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS)) {
> +			mutex_unlock(&group->notification_mutex);
>  			fsnotify_destroy_event(group, fsn_event);
> -		else
> +			mutex_lock(&group->notification_mutex);
> +		} else
>  			FANOTIFY_PE(fsn_event)->response = FAN_ALLOW;
>  	}
>  	mutex_unlock(&group->notification_mutex);
> diff --git a/fs/notify/notification.c b/fs/notify/notification.c
> index e455e83ceeeb..7d563dea52a4 100644
> --- a/fs/notify/notification.c
> +++ b/fs/notify/notification.c
> @@ -178,7 +178,9 @@ void fsnotify_flush_notify(struct fsnotify_group *group)
>  	mutex_lock(&group->notification_mutex);
>  	while (!fsnotify_notify_queue_is_empty(group)) {
>  		event = fsnotify_remove_first_event(group);
> +		mutex_unlock(&group->notification_mutex);
>  		fsnotify_destroy_event(group, event);
> +		mutex_lock(&group->notification_mutex);
>  	}
>  	mutex_unlock(&group->notification_mutex);
>  }
> 

Reviewed-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>


* Re: [PATCH 4/6] fsnotify: Convert notification_mutex to a spinlock
  2016-09-13 20:15 ` [PATCH 4/6] fsnotify: Convert notification_mutex to a spinlock Jan Kara
@ 2016-09-14 17:13   ` Lino Sanfilippo
  0 siblings, 0 replies; 13+ messages in thread
From: Lino Sanfilippo @ 2016-09-14 17:13 UTC (permalink / raw)
  To: Jan Kara, Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Eric Paris, Al Viro

On 13.09.2016 22:15, Jan Kara wrote:
> notification_mutex is used to protect the list of pending events. As
> such there's no reason to use a sleeping lock for it. Convert it to a
> spinlock.
> 
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  fs/notify/fanotify/fanotify_user.c | 26 +++++++++++++-------------
>  fs/notify/group.c                  |  6 +++---
>  fs/notify/inotify/inotify_user.c   | 16 ++++++++--------
>  fs/notify/notification.c           | 24 ++++++++++++------------
>  include/linux/fsnotify_backend.h   |  2 +-
>  5 files changed, 37 insertions(+), 37 deletions(-)
> 
> diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
> index 46d135c4988f..e6f3fe9bb2ed 100644
> --- a/fs/notify/fanotify/fanotify_user.c
> +++ b/fs/notify/fanotify/fanotify_user.c
> @@ -49,12 +49,12 @@ struct kmem_cache *fanotify_perm_event_cachep __read_mostly;
>   * enough to fit in "count". Return an error pointer if the count
>   * is not large enough.
>   *
> - * Called with the group->notification_mutex held.
> + * Called with the group->notification_lock held.
>   */
>  static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
>  					    size_t count)
>  {
> -	BUG_ON(!mutex_is_locked(&group->notification_mutex));
> +	BUG_ON(!spin_is_locked(&group->notification_lock));
>  
>  	pr_debug("%s: group=%p count=%zd\n", __func__, group, count);
>  
> @@ -64,7 +64,7 @@ static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
>  	if (FAN_EVENT_METADATA_LEN > count)
>  		return ERR_PTR(-EINVAL);
>  
> -	/* held the notification_mutex the whole time, so this is the
> +	/* held the notification_lock the whole time, so this is the
>  	 * same event we peeked above */
>  	return fsnotify_remove_first_event(group);
>  }

Just a nitpick: the comment does not reflect the current code any more (we
don't peek here), so it could be removed completely.

> @@ -244,10 +244,10 @@ static unsigned int fanotify_poll(struct file *file, poll_table *wait)
>  	int ret = 0;
>  
>  	poll_wait(file, &group->notification_waitq, wait);
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  	if (!fsnotify_notify_queue_is_empty(group))
>  		ret = POLLIN | POLLRDNORM;
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  
>  	return ret;
>  }
> @@ -268,9 +268,9 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
>  
>  	add_wait_queue(&group->notification_waitq, &wait);
>  	while (1) {
> -		mutex_lock(&group->notification_mutex);
> +		spin_lock(&group->notification_lock);
>  		kevent = get_one_event(group, count);
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  
>  		if (IS_ERR(kevent)) {
>  			ret = PTR_ERR(kevent);
> @@ -387,17 +387,17 @@ static int fanotify_release(struct inode *ignored, struct file *file)
>  	 * dequeue them and set the response. They will be freed once the
>  	 * response is consumed and fanotify_get_response() returns.
>  	 */
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  	while (!fsnotify_notify_queue_is_empty(group)) {
>  		fsn_event = fsnotify_remove_first_event(group);
>  		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS)) {
> -			mutex_unlock(&group->notification_mutex);
> +			spin_unlock(&group->notification_lock);
>  			fsnotify_destroy_event(group, fsn_event);
> -			mutex_lock(&group->notification_mutex);
> +			spin_lock(&group->notification_lock);
>  		} else
>  			FANOTIFY_PE(fsn_event)->response = FAN_ALLOW;
>  	}
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  
>  	/* Response for all permission events it set, wakeup waiters */
>  	wake_up(&group->fanotify_data.access_waitq);
> @@ -423,10 +423,10 @@ static long fanotify_ioctl(struct file *file, unsigned int cmd, unsigned long ar
>  
>  	switch (cmd) {
>  	case FIONREAD:
> -		mutex_lock(&group->notification_mutex);
> +		spin_lock(&group->notification_lock);
>  		list_for_each_entry(fsn_event, &group->notification_list, list)
>  			send_len += FAN_EVENT_METADATA_LEN;
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  		ret = put_user(send_len, (int __user *) p);
>  		break;
>  	}
> diff --git a/fs/notify/group.c b/fs/notify/group.c
> index b47f7cfdcaa4..fbe3cbebec16 100644
> --- a/fs/notify/group.c
> +++ b/fs/notify/group.c
> @@ -45,9 +45,9 @@ static void fsnotify_final_destroy_group(struct fsnotify_group *group)
>   */
>  void fsnotify_group_stop_queueing(struct fsnotify_group *group)
>  {
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  	group->shutdown = true;
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  }
>  
>  /*
> @@ -125,7 +125,7 @@ struct fsnotify_group *fsnotify_alloc_group(const struct fsnotify_ops *ops)
>  	atomic_set(&group->refcnt, 1);
>  	atomic_set(&group->num_marks, 0);
>  
> -	mutex_init(&group->notification_mutex);
> +	spin_lock_init(&group->notification_lock);
>  	INIT_LIST_HEAD(&group->notification_list);
>  	init_waitqueue_head(&group->notification_waitq);
>  	group->max_events = UINT_MAX;
> diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
> index b8d08d0d0a4d..69d1ea3d292a 100644
> --- a/fs/notify/inotify/inotify_user.c
> +++ b/fs/notify/inotify/inotify_user.c
> @@ -115,10 +115,10 @@ static unsigned int inotify_poll(struct file *file, poll_table *wait)
>  	int ret = 0;
>  
>  	poll_wait(file, &group->notification_waitq, wait);
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  	if (!fsnotify_notify_queue_is_empty(group))
>  		ret = POLLIN | POLLRDNORM;
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  
>  	return ret;
>  }
> @@ -138,7 +138,7 @@ static int round_event_name_len(struct fsnotify_event *fsn_event)
>   * enough to fit in "count". Return an error pointer if
>   * not large enough.
>   *
> - * Called with the group->notification_mutex held.
> + * Called with the group->notification_lock held.
>   */
>  static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
>  					    size_t count)
> @@ -157,7 +157,7 @@ static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
>  	if (event_size > count)
>  		return ERR_PTR(-EINVAL);
>  
> -	/* held the notification_mutex the whole time, so this is the
> +	/* held the notification_lock the whole time, so this is the
>  	 * same event we peeked above */
>  	fsnotify_remove_first_event(group);
>  
> @@ -234,9 +234,9 @@ static ssize_t inotify_read(struct file *file, char __user *buf,
>  
>  	add_wait_queue(&group->notification_waitq, &wait);
>  	while (1) {
> -		mutex_lock(&group->notification_mutex);
> +		spin_lock(&group->notification_lock);
>  		kevent = get_one_event(group, count);
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  
>  		pr_debug("%s: group=%p kevent=%p\n", __func__, group, kevent);
>  
> @@ -300,13 +300,13 @@ static long inotify_ioctl(struct file *file, unsigned int cmd,
>  
>  	switch (cmd) {
>  	case FIONREAD:
> -		mutex_lock(&group->notification_mutex);
> +		spin_lock(&group->notification_lock);
>  		list_for_each_entry(fsn_event, &group->notification_list,
>  				    list) {
>  			send_len += sizeof(struct inotify_event);
>  			send_len += round_event_name_len(fsn_event);
>  		}
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  		ret = put_user(send_len, (int __user *) p);
>  		break;
>  	}
> diff --git a/fs/notify/notification.c b/fs/notify/notification.c
> index 7d563dea52a4..070d255b24a2 100644
> --- a/fs/notify/notification.c
> +++ b/fs/notify/notification.c
> @@ -63,7 +63,7 @@ EXPORT_SYMBOL_GPL(fsnotify_get_cookie);
>  /* return true if the notify queue is empty, false otherwise */
>  bool fsnotify_notify_queue_is_empty(struct fsnotify_group *group)
>  {
> -	BUG_ON(!mutex_is_locked(&group->notification_mutex));
> +	BUG_ON(!spin_is_locked(&group->notification_lock));
>  	return list_empty(&group->notification_list) ? true : false;
>  }
>  
> @@ -95,10 +95,10 @@ int fsnotify_add_event(struct fsnotify_group *group,
>  
>  	pr_debug("%s: group=%p event=%p\n", __func__, group, event);
>  
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  
>  	if (group->shutdown) {
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  		return 2;
>  	}
>  
> @@ -106,7 +106,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
>  		ret = 2;
>  		/* Queue overflow event only if it isn't already queued */
>  		if (!list_empty(&group->overflow_event->list)) {
> -			mutex_unlock(&group->notification_mutex);
> +			spin_unlock(&group->notification_lock);
>  			return ret;
>  		}
>  		event = group->overflow_event;
> @@ -116,7 +116,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
>  	if (!list_empty(list) && merge) {
>  		ret = merge(list, event);
>  		if (ret) {
> -			mutex_unlock(&group->notification_mutex);
> +			spin_unlock(&group->notification_lock);
>  			return ret;
>  		}
>  	}
> @@ -124,7 +124,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
>  queue:
>  	group->q_len++;
>  	list_add_tail(&event->list, list);
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  
>  	wake_up(&group->notification_waitq);
>  	kill_fasync(&group->fsn_fa, SIGIO, POLL_IN);
> @@ -139,7 +139,7 @@ struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group)
>  {
>  	struct fsnotify_event *event;
>  
> -	BUG_ON(!mutex_is_locked(&group->notification_mutex));
> +	BUG_ON(!spin_is_locked(&group->notification_lock));
>  
>  	pr_debug("%s: group=%p\n", __func__, group);
>  
> @@ -161,7 +161,7 @@ struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group)
>   */
>  struct fsnotify_event *fsnotify_peek_first_event(struct fsnotify_group *group)
>  {
> -	BUG_ON(!mutex_is_locked(&group->notification_mutex));
> +	BUG_ON(!spin_is_locked(&group->notification_lock));
>  
>  	return list_first_entry(&group->notification_list,
>  				struct fsnotify_event, list);
> @@ -175,14 +175,14 @@ void fsnotify_flush_notify(struct fsnotify_group *group)
>  {
>  	struct fsnotify_event *event;
>  
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  	while (!fsnotify_notify_queue_is_empty(group)) {
>  		event = fsnotify_remove_first_event(group);
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  		fsnotify_destroy_event(group, event);
> -		mutex_lock(&group->notification_mutex);
> +		spin_lock(&group->notification_lock);
>  	}
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  }
>  
>  /*
> diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
> index 7268ed076be8..0713e873b1c9 100644
> --- a/include/linux/fsnotify_backend.h
> +++ b/include/linux/fsnotify_backend.h
> @@ -135,7 +135,7 @@ struct fsnotify_group {
>  	const struct fsnotify_ops *ops;	/* how this group handles things */
>  
>  	/* needed to send notification to userspace */
> -	struct mutex notification_mutex;	/* protect the notification_list */
> +	spinlock_t notification_lock;		/* protect the notification_list */
>  	struct list_head notification_list;	/* list of event_holder this group needs to send to userspace */
>  	wait_queue_head_t notification_waitq;	/* read() on the notification file blocks on this waitq */
>  	unsigned int q_len;			/* events on the queue */
> 

Reviewed-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>


* Re: [PATCH 5/6] fanotify: Use notification_lock instead of access_lock
  2016-09-13 20:15 ` [PATCH 5/6] fanotify: Use notification_lock instead of access_lock Jan Kara
@ 2016-09-14 17:14   ` Lino Sanfilippo
  0 siblings, 0 replies; 13+ messages in thread
From: Lino Sanfilippo @ 2016-09-14 17:14 UTC (permalink / raw)
  To: Jan Kara, Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Eric Paris, Al Viro

On 13.09.2016 22:15, Jan Kara wrote:
> Fanotify code has an own lock (access_lock) to protect a list of events
> waiting for a response from userspace. However this is somewhat awkward
> as the same list_head in the event is protected by notification_lock
> if it is part of the notification queue and by access_lock if it is part
> of the fanotify private queue which makes it difficult for any reliable
> checks in the generic code. So make fanotify use the same lock -
> notification_lock - for protecting its private event list.
> 
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  fs/notify/fanotify/fanotify_user.c | 13 +++++--------
>  include/linux/fsnotify_backend.h   |  1 -
>  2 files changed, 5 insertions(+), 9 deletions(-)
> 
> diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
> index e6f3fe9bb2ed..aed06fe2533d 100644
> --- a/fs/notify/fanotify/fanotify_user.c
> +++ b/fs/notify/fanotify/fanotify_user.c
> @@ -147,7 +147,7 @@ static struct fanotify_perm_event_info *dequeue_event(
>  {
>  	struct fanotify_perm_event_info *event, *return_e = NULL;
>  
> -	spin_lock(&group->fanotify_data.access_lock);
> +	spin_lock(&group->notification_lock);
>  	list_for_each_entry(event, &group->fanotify_data.access_list,
>  			    fae.fse.list) {
>  		if (event->fd != fd)
> @@ -157,7 +157,7 @@ static struct fanotify_perm_event_info *dequeue_event(
>  		return_e = event;
>  		break;
>  	}
> -	spin_unlock(&group->fanotify_data.access_lock);
> +	spin_unlock(&group->notification_lock);
>  
>  	pr_debug("%s: found return_re=%p\n", __func__, return_e);
>  
> @@ -309,10 +309,10 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
>  				wake_up(&group->fanotify_data.access_waitq);
>  				break;
>  			}
> -			spin_lock(&group->fanotify_data.access_lock);
> +			spin_lock(&group->notification_lock);
>  			list_add_tail(&kevent->list,
>  				      &group->fanotify_data.access_list);
> -			spin_unlock(&group->fanotify_data.access_lock);
> +			spin_unlock(&group->notification_lock);
>  #endif
>  		}
>  		buf += ret;
> @@ -371,7 +371,7 @@ static int fanotify_release(struct inode *ignored, struct file *file)
>  	 * Process all permission events on access_list and notification queue
>  	 * and simulate reply from userspace.
>  	 */
> -	spin_lock(&group->fanotify_data.access_lock);
> +	spin_lock(&group->notification_lock);
>  	list_for_each_entry_safe(event, next, &group->fanotify_data.access_list,
>  				 fae.fse.list) {
>  		pr_debug("%s: found group=%p event=%p\n", __func__, group,
> @@ -380,14 +380,12 @@ static int fanotify_release(struct inode *ignored, struct file *file)
>  		list_del_init(&event->fae.fse.list);
>  		event->response = FAN_ALLOW;
>  	}
> -	spin_unlock(&group->fanotify_data.access_lock);
>  
>  	/*
>  	 * Destroy all non-permission events. For permission events just
>  	 * dequeue them and set the response. They will be freed once the
>  	 * response is consumed and fanotify_get_response() returns.
>  	 */
> -	spin_lock(&group->notification_lock);
>  	while (!fsnotify_notify_queue_is_empty(group)) {
>  		fsn_event = fsnotify_remove_first_event(group);
>  		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS)) {
> @@ -767,7 +765,6 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
>  		event_f_flags |= O_LARGEFILE;
>  	group->fanotify_data.f_flags = event_f_flags;
>  #ifdef CONFIG_FANOTIFY_ACCESS_PERMISSIONS
> -	spin_lock_init(&group->fanotify_data.access_lock);
>  	init_waitqueue_head(&group->fanotify_data.access_waitq);
>  	INIT_LIST_HEAD(&group->fanotify_data.access_list);
>  #endif
> diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
> index 0713e873b1c9..79467b239fcf 100644
> --- a/include/linux/fsnotify_backend.h
> +++ b/include/linux/fsnotify_backend.h
> @@ -177,7 +177,6 @@ struct fsnotify_group {
>  		struct fanotify_group_private_data {
>  #ifdef CONFIG_FANOTIFY_ACCESS_PERMISSIONS
>  			/* allows a group to block waiting for a userspace response */
> -			spinlock_t access_lock;
>  			struct list_head access_list;
>  			wait_queue_head_t access_waitq;
>  #endif /* CONFIG_FANOTIFY_ACCESS_PERMISSIONS */
> 

Reviewed-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>


* Re: [PATCH 6/6] fanotify: Fix possible false warning when freeing events
  2016-09-13 20:15 ` [PATCH 6/6] fanotify: Fix possible false warning when freeing events Jan Kara
@ 2016-09-14 17:14   ` Lino Sanfilippo
  0 siblings, 0 replies; 13+ messages in thread
From: Lino Sanfilippo @ 2016-09-14 17:14 UTC (permalink / raw)
  To: Jan Kara, Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Eric Paris, Al Viro

On 13.09.2016 22:15, Jan Kara wrote:
> When freeing permission events by fsnotify_destroy_event(), the warning
> WARN_ON(!list_empty(&event->list));
> may falsely hit. This is because although fanotify_get_response() saw
> event->response set, there is nothing to make sure the current CPU also
> sees the removal of the event from the list. Add proper locking around
> the WARN_ON() to avoid the false warning.
> 
> Reported-by: Miklos Szeredi <mszeredi@redhat.com>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  fs/notify/notification.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/notify/notification.c b/fs/notify/notification.c
> index 070d255b24a2..6b7f430bb2de 100644
> --- a/fs/notify/notification.c
> +++ b/fs/notify/notification.c
> @@ -73,8 +73,17 @@ void fsnotify_destroy_event(struct fsnotify_group *group,
>  	/* Overflow events are per-group and we don't want to free them */
>  	if (!event || event->mask == FS_Q_OVERFLOW)
>  		return;
> -	/* If the event is still queued, we have a problem... */
> -	WARN_ON(!list_empty(&event->list));
> +	/*
> +	 * If the event is still queued, we have a problem... Do an unreliable
> +	 * lockless check first to avoid locking in the common case. The
> +	 * locking may be necessary for permission events which got removed
> +	 * from the list by a different CPU than the one freeing the event.
> +	 */
> +	if (!list_empty(&event->list)) {
> +		spin_lock(&group->notification_lock);
> +		WARN_ON(!list_empty(&event->list));
> +		spin_unlock(&group->notification_lock);
> +	}
>  	group->ops->free_event(event);
>  }
>  
> 

Reviewed-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/6] fsnotify: Convert notification_mutex to a spinlock
  2016-09-16 13:12 [PATCH 4/6] fsnotify: Convert notification_mutex to a spinlock Jan Kara
@ 2016-09-16 15:55 ` Guenter Roeck
  0 siblings, 0 replies; 13+ messages in thread
From: Guenter Roeck @ 2016-09-16 15:55 UTC (permalink / raw)
  To: Jan Kara
  Cc: Andrew Morton, linux-fsdevel, Miklos Szeredi, Lino Sanfilippo,
	Eric Paris

On Fri, Sep 16, 2016 at 03:12:47PM +0200, Jan Kara wrote:
> notification_mutex is used to protect the list of pending events. As
> such there's no reason to use a sleeping lock for it. Convert it to a
> spinlock.
> 
> Reviewed-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
> Signed-off-by: Jan Kara <jack@suse.cz>

Tested-by: Guenter Roeck <linux@roeck-us.net>

> ---
>  fs/notify/fanotify/fanotify_user.c | 27 ++++++++++++++-------------
>  fs/notify/group.c                  |  6 +++---
>  fs/notify/inotify/inotify_user.c   | 16 ++++++++--------
>  fs/notify/notification.c           | 27 +++++++++++++++------------
>  include/linux/fsnotify_backend.h   |  2 +-
>  5 files changed, 41 insertions(+), 37 deletions(-)
> 
> This is a fixed version of the patch that fixes the BUG_ON hitting on UP
> kernels.
> 
> diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
> index 46d135c4988f..80091a5dc8c0 100644
> --- a/fs/notify/fanotify/fanotify_user.c
> +++ b/fs/notify/fanotify/fanotify_user.c
> @@ -49,12 +49,13 @@ struct kmem_cache *fanotify_perm_event_cachep __read_mostly;
>   * enough to fit in "count". Return an error pointer if the count
>   * is not large enough.
>   *
> - * Called with the group->notification_mutex held.
> + * Called with the group->notification_lock held.
>   */
>  static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
>  					    size_t count)
>  {
> -	BUG_ON(!mutex_is_locked(&group->notification_mutex));
> +	BUG_ON(IS_ENABLED(CONFIG_SMP) &&
> +	       !spin_is_locked(&group->notification_lock));
>  
>  	pr_debug("%s: group=%p count=%zd\n", __func__, group, count);
>  
> @@ -64,7 +65,7 @@ static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
>  	if (FAN_EVENT_METADATA_LEN > count)
>  		return ERR_PTR(-EINVAL);
>  
> -	/* held the notification_mutex the whole time, so this is the
> +	/* held the notification_lock the whole time, so this is the
>  	 * same event we peeked above */
>  	return fsnotify_remove_first_event(group);
>  }
> @@ -244,10 +245,10 @@ static unsigned int fanotify_poll(struct file *file, poll_table *wait)
>  	int ret = 0;
>  
>  	poll_wait(file, &group->notification_waitq, wait);
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  	if (!fsnotify_notify_queue_is_empty(group))
>  		ret = POLLIN | POLLRDNORM;
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  
>  	return ret;
>  }
> @@ -268,9 +269,9 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
>  
>  	add_wait_queue(&group->notification_waitq, &wait);
>  	while (1) {
> -		mutex_lock(&group->notification_mutex);
> +		spin_lock(&group->notification_lock);
>  		kevent = get_one_event(group, count);
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  
>  		if (IS_ERR(kevent)) {
>  			ret = PTR_ERR(kevent);
> @@ -387,17 +388,17 @@ static int fanotify_release(struct inode *ignored, struct file *file)
>  	 * dequeue them and set the response. They will be freed once the
>  	 * response is consumed and fanotify_get_response() returns.
>  	 */
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  	while (!fsnotify_notify_queue_is_empty(group)) {
>  		fsn_event = fsnotify_remove_first_event(group);
>  		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS)) {
> -			mutex_unlock(&group->notification_mutex);
> +			spin_unlock(&group->notification_lock);
>  			fsnotify_destroy_event(group, fsn_event);
> -			mutex_lock(&group->notification_mutex);
> +			spin_lock(&group->notification_lock);
>  		} else
>  			FANOTIFY_PE(fsn_event)->response = FAN_ALLOW;
>  	}
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  
>  	/* Response for all permission events is set, wakeup waiters */
>  	wake_up(&group->fanotify_data.access_waitq);
> @@ -423,10 +424,10 @@ static long fanotify_ioctl(struct file *file, unsigned int cmd, unsigned long ar
>  
>  	switch (cmd) {
>  	case FIONREAD:
> -		mutex_lock(&group->notification_mutex);
> +		spin_lock(&group->notification_lock);
>  		list_for_each_entry(fsn_event, &group->notification_list, list)
>  			send_len += FAN_EVENT_METADATA_LEN;
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  		ret = put_user(send_len, (int __user *) p);
>  		break;
>  	}
> diff --git a/fs/notify/group.c b/fs/notify/group.c
> index b47f7cfdcaa4..fbe3cbebec16 100644
> --- a/fs/notify/group.c
> +++ b/fs/notify/group.c
> @@ -45,9 +45,9 @@ static void fsnotify_final_destroy_group(struct fsnotify_group *group)
>   */
>  void fsnotify_group_stop_queueing(struct fsnotify_group *group)
>  {
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  	group->shutdown = true;
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  }
>  
>  /*
> @@ -125,7 +125,7 @@ struct fsnotify_group *fsnotify_alloc_group(const struct fsnotify_ops *ops)
>  	atomic_set(&group->refcnt, 1);
>  	atomic_set(&group->num_marks, 0);
>  
> -	mutex_init(&group->notification_mutex);
> +	spin_lock_init(&group->notification_lock);
>  	INIT_LIST_HEAD(&group->notification_list);
>  	init_waitqueue_head(&group->notification_waitq);
>  	group->max_events = UINT_MAX;
> diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
> index b8d08d0d0a4d..69d1ea3d292a 100644
> --- a/fs/notify/inotify/inotify_user.c
> +++ b/fs/notify/inotify/inotify_user.c
> @@ -115,10 +115,10 @@ static unsigned int inotify_poll(struct file *file, poll_table *wait)
>  	int ret = 0;
>  
>  	poll_wait(file, &group->notification_waitq, wait);
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  	if (!fsnotify_notify_queue_is_empty(group))
>  		ret = POLLIN | POLLRDNORM;
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  
>  	return ret;
>  }
> @@ -138,7 +138,7 @@ static int round_event_name_len(struct fsnotify_event *fsn_event)
>   * enough to fit in "count". Return an error pointer if
>   * not large enough.
>   *
> - * Called with the group->notification_mutex held.
> + * Called with the group->notification_lock held.
>   */
>  static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
>  					    size_t count)
> @@ -157,7 +157,7 @@ static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
>  	if (event_size > count)
>  		return ERR_PTR(-EINVAL);
>  
> -	/* held the notification_mutex the whole time, so this is the
> +	/* held the notification_lock the whole time, so this is the
>  	 * same event we peeked above */
>  	fsnotify_remove_first_event(group);
>  
> @@ -234,9 +234,9 @@ static ssize_t inotify_read(struct file *file, char __user *buf,
>  
>  	add_wait_queue(&group->notification_waitq, &wait);
>  	while (1) {
> -		mutex_lock(&group->notification_mutex);
> +		spin_lock(&group->notification_lock);
>  		kevent = get_one_event(group, count);
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  
>  		pr_debug("%s: group=%p kevent=%p\n", __func__, group, kevent);
>  
> @@ -300,13 +300,13 @@ static long inotify_ioctl(struct file *file, unsigned int cmd,
>  
>  	switch (cmd) {
>  	case FIONREAD:
> -		mutex_lock(&group->notification_mutex);
> +		spin_lock(&group->notification_lock);
>  		list_for_each_entry(fsn_event, &group->notification_list,
>  				    list) {
>  			send_len += sizeof(struct inotify_event);
>  			send_len += round_event_name_len(fsn_event);
>  		}
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  		ret = put_user(send_len, (int __user *) p);
>  		break;
>  	}
> diff --git a/fs/notify/notification.c b/fs/notify/notification.c
> index 7d563dea52a4..8a7a8cd041e8 100644
> --- a/fs/notify/notification.c
> +++ b/fs/notify/notification.c
> @@ -63,7 +63,8 @@ EXPORT_SYMBOL_GPL(fsnotify_get_cookie);
>  /* return true if the notify queue is empty, false otherwise */
>  bool fsnotify_notify_queue_is_empty(struct fsnotify_group *group)
>  {
> -	BUG_ON(!mutex_is_locked(&group->notification_mutex));
> +	BUG_ON(IS_ENABLED(CONFIG_SMP) &&
> +	       !spin_is_locked(&group->notification_lock));
>  	return list_empty(&group->notification_list) ? true : false;
>  }
>  
> @@ -95,10 +96,10 @@ int fsnotify_add_event(struct fsnotify_group *group,
>  
>  	pr_debug("%s: group=%p event=%p\n", __func__, group, event);
>  
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  
>  	if (group->shutdown) {
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  		return 2;
>  	}
>  
> @@ -106,7 +107,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
>  		ret = 2;
>  		/* Queue overflow event only if it isn't already queued */
>  		if (!list_empty(&group->overflow_event->list)) {
> -			mutex_unlock(&group->notification_mutex);
> +			spin_unlock(&group->notification_lock);
>  			return ret;
>  		}
>  		event = group->overflow_event;
> @@ -116,7 +117,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
>  	if (!list_empty(list) && merge) {
>  		ret = merge(list, event);
>  		if (ret) {
> -			mutex_unlock(&group->notification_mutex);
> +			spin_unlock(&group->notification_lock);
>  			return ret;
>  		}
>  	}
> @@ -124,7 +125,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
>  queue:
>  	group->q_len++;
>  	list_add_tail(&event->list, list);
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  
>  	wake_up(&group->notification_waitq);
>  	kill_fasync(&group->fsn_fa, SIGIO, POLL_IN);
> @@ -139,7 +140,8 @@ struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group)
>  {
>  	struct fsnotify_event *event;
>  
> -	BUG_ON(!mutex_is_locked(&group->notification_mutex));
> +	BUG_ON(IS_ENABLED(CONFIG_SMP) &&
> +	       !spin_is_locked(&group->notification_lock));
>  
>  	pr_debug("%s: group=%p\n", __func__, group);
>  
> @@ -161,7 +163,8 @@ struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group)
>   */
>  struct fsnotify_event *fsnotify_peek_first_event(struct fsnotify_group *group)
>  {
> -	BUG_ON(!mutex_is_locked(&group->notification_mutex));
> +	BUG_ON(IS_ENABLED(CONFIG_SMP) &&
> +	       !spin_is_locked(&group->notification_lock));
>  
>  	return list_first_entry(&group->notification_list,
>  				struct fsnotify_event, list);
> @@ -175,14 +178,14 @@ void fsnotify_flush_notify(struct fsnotify_group *group)
>  {
>  	struct fsnotify_event *event;
>  
> -	mutex_lock(&group->notification_mutex);
> +	spin_lock(&group->notification_lock);
>  	while (!fsnotify_notify_queue_is_empty(group)) {
>  		event = fsnotify_remove_first_event(group);
> -		mutex_unlock(&group->notification_mutex);
> +		spin_unlock(&group->notification_lock);
>  		fsnotify_destroy_event(group, event);
> -		mutex_lock(&group->notification_mutex);
> +		spin_lock(&group->notification_lock);
>  	}
> -	mutex_unlock(&group->notification_mutex);
> +	spin_unlock(&group->notification_lock);
>  }
>  
>  /*
> diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
> index 7268ed076be8..0713e873b1c9 100644
> --- a/include/linux/fsnotify_backend.h
> +++ b/include/linux/fsnotify_backend.h
> @@ -135,7 +135,7 @@ struct fsnotify_group {
>  	const struct fsnotify_ops *ops;	/* how this group handles things */
>  
>  	/* needed to send notification to userspace */
> -	struct mutex notification_mutex;	/* protect the notification_list */
> +	spinlock_t notification_lock;		/* protect the notification_list */
>  	struct list_head notification_list;	/* list of event_holder this group needs to send to userspace */
>  	wait_queue_head_t notification_waitq;	/* read() on the notification file blocks on this waitq */
>  	unsigned int q_len;			/* events on the queue */
> -- 
> 2.6.6
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 4/6] fsnotify: Convert notification_mutex to a spinlock
@ 2016-09-16 13:12 Jan Kara
  2016-09-16 15:55 ` Guenter Roeck
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Kara @ 2016-09-16 13:12 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-fsdevel, Miklos Szeredi, Lino Sanfilippo, Eric Paris,
	Guenter Roeck, Jan Kara

notification_mutex is used to protect the list of pending events. As
such there's no reason to use a sleeping lock for it. Convert it to a
spinlock.

Reviewed-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/notify/fanotify/fanotify_user.c | 27 ++++++++++++++-------------
 fs/notify/group.c                  |  6 +++---
 fs/notify/inotify/inotify_user.c   | 16 ++++++++--------
 fs/notify/notification.c           | 27 +++++++++++++++------------
 include/linux/fsnotify_backend.h   |  2 +-
 5 files changed, 41 insertions(+), 37 deletions(-)

This is a fixed version of the patch that fixes the BUG_ON hitting on UP
kernels.
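
For the record, the reason the unguarded assertion cannot work on UP: without
CONFIG_SMP (and with spinlock debugging disabled) the lock carries no state
and spin_is_locked() unconditionally returns 0, so a bare
BUG_ON(!spin_is_locked(...)) fires on every call. The assertions in this
version are therefore gated on the SMP config, i.e. they take the form

	BUG_ON(IS_ENABLED(CONFIG_SMP) &&
	       !spin_is_locked(&group->notification_lock));

which compiles to a no-op on UP builds and keeps the lock-held check on SMP.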

diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
index 46d135c4988f..80091a5dc8c0 100644
--- a/fs/notify/fanotify/fanotify_user.c
+++ b/fs/notify/fanotify/fanotify_user.c
@@ -49,12 +49,13 @@ struct kmem_cache *fanotify_perm_event_cachep __read_mostly;
  * enough to fit in "count". Return an error pointer if the count
  * is not large enough.
  *
- * Called with the group->notification_mutex held.
+ * Called with the group->notification_lock held.
  */
 static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
 					    size_t count)
 {
-	BUG_ON(!mutex_is_locked(&group->notification_mutex));
+	BUG_ON(IS_ENABLED(CONFIG_SMP) &&
+	       !spin_is_locked(&group->notification_lock));
 
 	pr_debug("%s: group=%p count=%zd\n", __func__, group, count);
 
@@ -64,7 +65,7 @@ static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
 	if (FAN_EVENT_METADATA_LEN > count)
 		return ERR_PTR(-EINVAL);
 
-	/* held the notification_mutex the whole time, so this is the
+	/* held the notification_lock the whole time, so this is the
 	 * same event we peeked above */
 	return fsnotify_remove_first_event(group);
 }
@@ -244,10 +245,10 @@ static unsigned int fanotify_poll(struct file *file, poll_table *wait)
 	int ret = 0;
 
 	poll_wait(file, &group->notification_waitq, wait);
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 	if (!fsnotify_notify_queue_is_empty(group))
 		ret = POLLIN | POLLRDNORM;
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 
 	return ret;
 }
@@ -268,9 +269,9 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
 
 	add_wait_queue(&group->notification_waitq, &wait);
 	while (1) {
-		mutex_lock(&group->notification_mutex);
+		spin_lock(&group->notification_lock);
 		kevent = get_one_event(group, count);
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 
 		if (IS_ERR(kevent)) {
 			ret = PTR_ERR(kevent);
@@ -387,17 +388,17 @@ static int fanotify_release(struct inode *ignored, struct file *file)
 	 * dequeue them and set the response. They will be freed once the
 	 * response is consumed and fanotify_get_response() returns.
 	 */
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 	while (!fsnotify_notify_queue_is_empty(group)) {
 		fsn_event = fsnotify_remove_first_event(group);
 		if (!(fsn_event->mask & FAN_ALL_PERM_EVENTS)) {
-			mutex_unlock(&group->notification_mutex);
+			spin_unlock(&group->notification_lock);
 			fsnotify_destroy_event(group, fsn_event);
-			mutex_lock(&group->notification_mutex);
+			spin_lock(&group->notification_lock);
 		} else
 			FANOTIFY_PE(fsn_event)->response = FAN_ALLOW;
 	}
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 
 	/* Response for all permission events is set, wakeup waiters */
 	wake_up(&group->fanotify_data.access_waitq);
@@ -423,10 +424,10 @@ static long fanotify_ioctl(struct file *file, unsigned int cmd, unsigned long ar
 
 	switch (cmd) {
 	case FIONREAD:
-		mutex_lock(&group->notification_mutex);
+		spin_lock(&group->notification_lock);
 		list_for_each_entry(fsn_event, &group->notification_list, list)
 			send_len += FAN_EVENT_METADATA_LEN;
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 		ret = put_user(send_len, (int __user *) p);
 		break;
 	}
diff --git a/fs/notify/group.c b/fs/notify/group.c
index b47f7cfdcaa4..fbe3cbebec16 100644
--- a/fs/notify/group.c
+++ b/fs/notify/group.c
@@ -45,9 +45,9 @@ static void fsnotify_final_destroy_group(struct fsnotify_group *group)
  */
 void fsnotify_group_stop_queueing(struct fsnotify_group *group)
 {
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 	group->shutdown = true;
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 }
 
 /*
@@ -125,7 +125,7 @@ struct fsnotify_group *fsnotify_alloc_group(const struct fsnotify_ops *ops)
 	atomic_set(&group->refcnt, 1);
 	atomic_set(&group->num_marks, 0);
 
-	mutex_init(&group->notification_mutex);
+	spin_lock_init(&group->notification_lock);
 	INIT_LIST_HEAD(&group->notification_list);
 	init_waitqueue_head(&group->notification_waitq);
 	group->max_events = UINT_MAX;
diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
index b8d08d0d0a4d..69d1ea3d292a 100644
--- a/fs/notify/inotify/inotify_user.c
+++ b/fs/notify/inotify/inotify_user.c
@@ -115,10 +115,10 @@ static unsigned int inotify_poll(struct file *file, poll_table *wait)
 	int ret = 0;
 
 	poll_wait(file, &group->notification_waitq, wait);
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 	if (!fsnotify_notify_queue_is_empty(group))
 		ret = POLLIN | POLLRDNORM;
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 
 	return ret;
 }
@@ -138,7 +138,7 @@ static int round_event_name_len(struct fsnotify_event *fsn_event)
  * enough to fit in "count". Return an error pointer if
  * not large enough.
  *
- * Called with the group->notification_mutex held.
+ * Called with the group->notification_lock held.
  */
 static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
 					    size_t count)
@@ -157,7 +157,7 @@ static struct fsnotify_event *get_one_event(struct fsnotify_group *group,
 	if (event_size > count)
 		return ERR_PTR(-EINVAL);
 
-	/* held the notification_mutex the whole time, so this is the
+	/* held the notification_lock the whole time, so this is the
 	 * same event we peeked above */
 	fsnotify_remove_first_event(group);
 
@@ -234,9 +234,9 @@ static ssize_t inotify_read(struct file *file, char __user *buf,
 
 	add_wait_queue(&group->notification_waitq, &wait);
 	while (1) {
-		mutex_lock(&group->notification_mutex);
+		spin_lock(&group->notification_lock);
 		kevent = get_one_event(group, count);
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 
 		pr_debug("%s: group=%p kevent=%p\n", __func__, group, kevent);
 
@@ -300,13 +300,13 @@ static long inotify_ioctl(struct file *file, unsigned int cmd,
 
 	switch (cmd) {
 	case FIONREAD:
-		mutex_lock(&group->notification_mutex);
+		spin_lock(&group->notification_lock);
 		list_for_each_entry(fsn_event, &group->notification_list,
 				    list) {
 			send_len += sizeof(struct inotify_event);
 			send_len += round_event_name_len(fsn_event);
 		}
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 		ret = put_user(send_len, (int __user *) p);
 		break;
 	}
diff --git a/fs/notify/notification.c b/fs/notify/notification.c
index 7d563dea52a4..8a7a8cd041e8 100644
--- a/fs/notify/notification.c
+++ b/fs/notify/notification.c
@@ -63,7 +63,8 @@ EXPORT_SYMBOL_GPL(fsnotify_get_cookie);
 /* return true if the notify queue is empty, false otherwise */
 bool fsnotify_notify_queue_is_empty(struct fsnotify_group *group)
 {
-	BUG_ON(!mutex_is_locked(&group->notification_mutex));
+	BUG_ON(IS_ENABLED(CONFIG_SMP) &&
+	       !spin_is_locked(&group->notification_lock));
 	return list_empty(&group->notification_list) ? true : false;
 }
 
@@ -95,10 +96,10 @@ int fsnotify_add_event(struct fsnotify_group *group,
 
 	pr_debug("%s: group=%p event=%p\n", __func__, group, event);
 
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 
 	if (group->shutdown) {
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 		return 2;
 	}
 
@@ -106,7 +107,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
 		ret = 2;
 		/* Queue overflow event only if it isn't already queued */
 		if (!list_empty(&group->overflow_event->list)) {
-			mutex_unlock(&group->notification_mutex);
+			spin_unlock(&group->notification_lock);
 			return ret;
 		}
 		event = group->overflow_event;
@@ -116,7 +117,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
 	if (!list_empty(list) && merge) {
 		ret = merge(list, event);
 		if (ret) {
-			mutex_unlock(&group->notification_mutex);
+			spin_unlock(&group->notification_lock);
 			return ret;
 		}
 	}
@@ -124,7 +125,7 @@ int fsnotify_add_event(struct fsnotify_group *group,
 queue:
 	group->q_len++;
 	list_add_tail(&event->list, list);
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 
 	wake_up(&group->notification_waitq);
 	kill_fasync(&group->fsn_fa, SIGIO, POLL_IN);
@@ -139,7 +140,8 @@ struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group)
 {
 	struct fsnotify_event *event;
 
-	BUG_ON(!mutex_is_locked(&group->notification_mutex));
+	BUG_ON(IS_ENABLED(CONFIG_SMP) &&
+	       !spin_is_locked(&group->notification_lock));
 
 	pr_debug("%s: group=%p\n", __func__, group);
 
@@ -161,7 +163,8 @@ struct fsnotify_event *fsnotify_remove_first_event(struct fsnotify_group *group)
  */
 struct fsnotify_event *fsnotify_peek_first_event(struct fsnotify_group *group)
 {
-	BUG_ON(!mutex_is_locked(&group->notification_mutex));
+	BUG_ON(IS_ENABLED(CONFIG_SMP) &&
+	       !spin_is_locked(&group->notification_lock));
 
 	return list_first_entry(&group->notification_list,
 				struct fsnotify_event, list);
@@ -175,14 +178,14 @@ void fsnotify_flush_notify(struct fsnotify_group *group)
 {
 	struct fsnotify_event *event;
 
-	mutex_lock(&group->notification_mutex);
+	spin_lock(&group->notification_lock);
 	while (!fsnotify_notify_queue_is_empty(group)) {
 		event = fsnotify_remove_first_event(group);
-		mutex_unlock(&group->notification_mutex);
+		spin_unlock(&group->notification_lock);
 		fsnotify_destroy_event(group, event);
-		mutex_lock(&group->notification_mutex);
+		spin_lock(&group->notification_lock);
 	}
-	mutex_unlock(&group->notification_mutex);
+	spin_unlock(&group->notification_lock);
 }
 
 /*
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index 7268ed076be8..0713e873b1c9 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -135,7 +135,7 @@ struct fsnotify_group {
 	const struct fsnotify_ops *ops;	/* how this group handles things */
 
 	/* needed to send notification to userspace */
-	struct mutex notification_mutex;	/* protect the notification_list */
+	spinlock_t notification_lock;		/* protect the notification_list */
 	struct list_head notification_list;	/* list of event_holder this group needs to send to userspace */
 	wait_queue_head_t notification_waitq;	/* read() on the notification file blocks on this waitq */
 	unsigned int q_len;			/* events on the queue */
-- 
2.6.6


^ permalink raw reply related	[flat|nested] 13+ messages in thread

Thread overview: 13+ messages
2016-09-13 20:15 [PATCH 0/6 v2] fsnotify: Fix list corruption for permission events Jan Kara
2016-09-13 20:15 ` [PATCH 1/6] fsnotify: Add a way to stop queueing events on group shutdown Jan Kara
2016-09-13 20:15 ` [PATCH 2/6] fanotify: Fix list corruption in fanotify_get_response() Jan Kara
2016-09-13 20:15 ` [PATCH 3/6] fsnotify: Drop notification_mutex before destroying event Jan Kara
2016-09-14 17:11   ` Lino Sanfilippo
2016-09-13 20:15 ` [PATCH 4/6] fsnotify: Convert notification_mutex to a spinlock Jan Kara
2016-09-14 17:13   ` Lino Sanfilippo
2016-09-13 20:15 ` [PATCH 5/6] fanotify: Use notification_lock instead of access_lock Jan Kara
2016-09-14 17:14   ` Lino Sanfilippo
2016-09-13 20:15 ` [PATCH 6/6] fanotify: Fix possible false warning when freeing events Jan Kara
2016-09-14 17:14   ` Lino Sanfilippo
2016-09-16 13:12 [PATCH 4/6] fsnotify: Convert notification_mutex to a spinlock Jan Kara
2016-09-16 15:55 ` Guenter Roeck
