linux-kernel.vger.kernel.org archive mirror
* [PATCH -next/-mmotm] kernel/user.c: fix build when EPOLL not enabled
@ 2021-08-03 20:46 Randy Dunlap
  2021-08-04  4:59 ` Nicholas Piggin
  0 siblings, 1 reply; 4+ messages in thread
From: Randy Dunlap @ 2021-08-03 20:46 UTC
  To: linux-kernel; +Cc: Randy Dunlap, Andrew Morton, Nick Piggin, Mark Brown

Fix build errors in kernel/user.c when CONFIG_EPOLL is not set/enabled.

../kernel/user.c: In function ‘free_user’:
../kernel/user.c:141:30: error: ‘struct user_struct’ has no member named ‘epoll_watches’; did you mean ‘nr_watches’?
  percpu_counter_destroy(&up->epoll_watches);
                              ^~~~~~~~~~~~~
In file included from ../include/linux/sched/user.h:7:0,
                 from ../kernel/user.c:17:
../kernel/user.c: In function ‘alloc_uid’:
../kernel/user.c:189:33: error: ‘struct user_struct’ has no member named ‘epoll_watches’; did you mean ‘nr_watches’?
   if (percpu_counter_init(&new->epoll_watches, 0, GFP_KERNEL)) {
                                 ^
../kernel/user.c:203:33: error: ‘struct user_struct’ has no member named ‘epoll_watches’; did you mean ‘nr_watches’?
    percpu_counter_destroy(&new->epoll_watches);
                                 ^~~~~~~~~~~~~
In file included from ../include/linux/sched/user.h:7:0,
                 from ../kernel/user.c:17:
../kernel/user.c: In function ‘uid_cache_init’:
../kernel/user.c:225:37: error: ‘struct user_struct’ has no member named ‘epoll_watches’; did you mean ‘nr_watches’?
  if (percpu_counter_init(&root_user.epoll_watches, 0, GFP_KERNEL))
                                     ^
Also fix typo: "cpunter" -> "counter" in a panic message.

Fixes: e75b89477811 ("fs/epoll: use a per-cpu counter for user's watches count")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Mark Brown <broonie@kernel.org>
---
 kernel/user.c |   10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
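
Background note for reviewers: epoll_watches is only declared in
struct user_struct when CONFIG_EPOLL is enabled, which is why the
unconditional percpu_counter calls break the EPOLL=n build. A rough
sketch of the relevant declaration, based on the cited commit (other
members omitted):

/* include/linux/sched/user.h (sketch, not the full struct) */
struct user_struct {
	refcount_t __count;		/* reference count */
#ifdef CONFIG_EPOLL
	/* number of file descriptors currently watched via epoll */
	struct percpu_counter epoll_watches;
#endif
	/* ... remaining members unchanged ... */
};

Guarding the percpu_counter calls in kernel/user.c with the same
#ifdef keeps the EPOLL=n build working without changing behavior
when EPOLL=y.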

--- mmotm-2021-0802-1851.orig/kernel/user.c
+++ mmotm-2021-0802-1851/kernel/user.c
@@ -137,9 +137,11 @@ static void free_user(struct user_struct
 	__releases(&uidhash_lock)
 {
 	uid_hash_remove(up);
 	spin_unlock_irqrestore(&uidhash_lock, flags);
+#ifdef CONFIG_EPOLL
 	percpu_counter_destroy(&up->epoll_watches);
+#endif
 	kmem_cache_free(uid_cachep, up);
 }
 
 /*
@@ -186,10 +188,12 @@ struct user_struct *alloc_uid(kuid_t uid
 
 		new->uid = uid;
 		refcount_set(&new->__count, 1);
+#ifdef CONFIG_EPOLL
 		if (percpu_counter_init(&new->epoll_watches, 0, GFP_KERNEL)) {
 			kmem_cache_free(uid_cachep, new);
 			return NULL;
 		}
+#endif
 		ratelimit_state_init(&new->ratelimit, HZ, 100);
 		ratelimit_set_flags(&new->ratelimit, RATELIMIT_MSG_ON_RELEASE);
 
@@ -200,7 +204,9 @@ struct user_struct *alloc_uid(kuid_t uid
 		spin_lock_irq(&uidhash_lock);
 		up = uid_hash_find(uid, hashent);
 		if (up) {
+#ifdef CONFIG_EPOLL
 			percpu_counter_destroy(&new->epoll_watches);
+#endif
 			kmem_cache_free(uid_cachep, new);
 		} else {
 			uid_hash_insert(new, hashent);
@@ -222,8 +228,10 @@ static int __init uid_cache_init(void)
 	for(n = 0; n < UIDHASH_SZ; ++n)
 		INIT_HLIST_HEAD(uidhash_table + n);
 
+#ifdef CONFIG_EPOLL
 	if (percpu_counter_init(&root_user.epoll_watches, 0, GFP_KERNEL))
-		panic("percpu cpunter alloc failed");
+		panic("percpu counter alloc failed");
+#endif
 
 	/* Insert the root user immediately (init already runs as root) */
 	spin_lock_irq(&uidhash_lock);


Thread overview: 4+ messages
2021-08-03 20:46 [PATCH -next/-mmotm] kernel/user.c: fix build when EPOLL not enabled Randy Dunlap
2021-08-04  4:59 ` Nicholas Piggin
2021-08-04  5:09   ` Randy Dunlap
2021-08-04 19:14   ` Guenter Roeck
