linux-kernel.vger.kernel.org archive mirror
* [PATCH] ipc/sem: Avoid indexing past end of sem_array
@ 2017-05-08 22:23 Kees Cook
  2017-05-14 13:54 ` Manfred Spraul
                   ` (2 more replies)
  0 siblings, 3 replies; 38+ messages in thread
From: Kees Cook @ 2017-05-08 22:23 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Davidlohr Bueso, Manfred Spraul, Ingo Molnar, Peter Zijlstra,
	Fabian Frederick, linux-kernel

This changes the struct + trailing data pattern to use a void * so that
the end of sem_array is found without possibly indexing past the end,
which can upset some static analyzers. Mostly, this ends up avoiding a
cast between different non-void types, which the future randstruct GCC
plugin was warning about.
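The change can be sketched in plain userspace C (simplified stand-in types; malloc() stands in for ipc_rcu_alloc(), and the kernel version relies on GNU void * arithmetic where this sketch uses an explicit char * step):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-ins for the kernel structures, for illustration only. */
struct sem { int semval; int sempid; };

struct sem_array {
	int sem_nsems;
	struct sem *sem_base;	/* points at the trailing data */
};

/*
 * Before: sma->sem_base = (struct sem *)&sma[1]; -- a cast between two
 * different non-void struct types that also indexes one element past
 * sma[0].  After: do the arithmetic on the untyped allocation, so only
 * implicit void * conversions are involved.
 */
static struct sem_array *newary_sketch(int nsems)
{
	size_t size = sizeof(struct sem_array) + nsems * sizeof(struct sem);
	void *sem_alloc = malloc(size);	/* stands in for ipc_rcu_alloc() */
	struct sem_array *sma;

	if (!sem_alloc)
		return NULL;
	memset(sem_alloc, 0, size);

	sma = sem_alloc;	/* implicit void * conversion, no cast */
	sma->sem_base = (void *)((char *)sem_alloc + sizeof(*sma));
	sma->sem_nsems = nsems;
	return sma;
}
```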

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 ipc/sem.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 947dc2348271..f7cae2b35d62 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -475,6 +475,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 {
 	int id;
 	int retval;
+	void *sem_alloc;
 	struct sem_array *sma;
 	int size;
 	key_t key = params->key;
@@ -488,11 +489,14 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 		return -ENOSPC;
 
 	size = sizeof(*sma) + nsems * sizeof(struct sem);
-	sma = ipc_rcu_alloc(size);
-	if (!sma)
+	sem_alloc = ipc_rcu_alloc(size);
+	if (!sem_alloc)
 		return -ENOMEM;
 
-	memset(sma, 0, size);
+	memset(sem_alloc, 0, size);
+
+	sma = sem_alloc;
+	sma->sem_base = sem_alloc + sizeof(*sma);
 
 	sma->sem_perm.mode = (semflg & S_IRWXUGO);
 	sma->sem_perm.key = key;
@@ -504,8 +508,6 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 		return retval;
 	}
 
-	sma->sem_base = (struct sem *) &sma[1];
-
 	for (i = 0; i < nsems; i++) {
 		INIT_LIST_HEAD(&sma->sem_base[i].pending_alter);
 		INIT_LIST_HEAD(&sma->sem_base[i].pending_const);
-- 
2.7.4


-- 
Kees Cook
Pixel Security

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH] ipc/sem: Avoid indexing past end of sem_array
  2017-05-08 22:23 [PATCH] ipc/sem: Avoid indexing past end of sem_array Kees Cook
@ 2017-05-14 13:54 ` Manfred Spraul
  2017-05-15 17:40   ` Kees Cook
  2017-05-15 17:19 ` [PATCH 0/2] Misc cleanups for ipc Manfred Spraul
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
  2 siblings, 1 reply; 38+ messages in thread
From: Manfred Spraul @ 2017-05-14 13:54 UTC (permalink / raw)
  To: Kees Cook, Andrew Morton
  Cc: Davidlohr Bueso, Ingo Molnar, Peter Zijlstra, Fabian Frederick,
	linux-kernel

[-- Attachment #1: Type: text/plain, Size: 842 bytes --]

Hi Kees,

On 05/09/2017 12:23 AM, Kees Cook wrote:
> This changes the struct + trailing data pattern to use a void * so that
> the end of sem_array is found without possibly indexing past the end,
> which can upset some static analyzers. Mostly, this ends up avoiding a
> cast between different non-void types, which the future randstruct GCC
> plugin was warning about.
Two questions:
- Would the attached patch work with the randstruct plugin as well?
   If we touch the code, then I would propose that we remove sem_base 
entirely.

- ipc/util.h contains

 > #define ipc_rcu_to_struct(p)  ((void *)(p+1))

Does this trigger a warning with randstruct as well?
If we have to touch it, then I would remove it by merging struct 
kern_ipc_perm and struct ipc_rcu.

And, obviously:
Do you see any issues with the attached patch?
--
     Manfred

[-- Attachment #2: 0001-ipc-sem.c-remove-sem_base-embed-struct-sem.patch --]
[-- Type: text/x-patch, Size: 12758 bytes --]

From cf680121e348241d75ca01b3d68eeeab93e5c589 Mon Sep 17 00:00:00 2001
From: Manfred Spraul <manfred@colorfullife.com>
Date: Sat, 13 May 2017 18:02:20 +0200
Subject: [PATCH] ipc/sem.c: remove sem_base, embed struct sem

sma->sem_base is initialized with
	sma->sem_base = (struct sem *) &sma[1];

The current code has four problems:
- There is an unnecessary pointer dereference - sem_base is not needed.
- Alignment for struct sem only works by chance.
- The current code causes false positives for static code analysis.
- This is a cast between different non-void types, which the future
  randstruct GCC plugin warns on.

And, as bonus, the code size gets smaller:

Before:
  0 .text         00003770
After:
  0 .text         0000374e
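The embedded-array layout can be sketched in plain C (simplified fields; C99 spells the flexible array member `sems[]` where the patch uses the older GNU `sems[0]` form):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct sem {
	int semval;
	int sempid;
};

/*
 * The array is now embedded: there is no sem_base pointer to
 * dereference, and the compiler guarantees that sems[] starts at a
 * correctly aligned offset.  With the old (struct sem *)&sma[1] cast,
 * alignment only worked because struct sem happened to need no more
 * alignment than the end of sem_array provided.
 */
struct sem_array {
	int sem_nsems;
	struct sem sems[];	/* flexible array member */
};

static struct sem_array *alloc_sem_array(int nsems)
{
	struct sem_array *sma;
	size_t size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);

	sma = calloc(1, size);	/* stands in for ipc_rcu_alloc + memset */
	if (sma)
		sma->sem_nsems = nsems;
	return sma;
}
```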

---
RFC:
1+2 are obviously fixed; does it resolve 3+4 as well?

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 include/linux/sem.h | 22 +++++++++++++-
 ipc/sem.c           | 88 +++++++++++++++++++++--------------------------------
 2 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/include/linux/sem.h b/include/linux/sem.h
index 9edec92..c2218a4 100644
--- a/include/linux/sem.h
+++ b/include/linux/sem.h
@@ -8,11 +8,29 @@
 
 struct task_struct;
 
+/* One semaphore structure for each semaphore in the system. */
+struct sem {
+	int	semval;		/* current value */
+	/*
+	 * PID of the process that last modified the semaphore. For
+	 * Linux, specifically these are:
+	 *  - semop
+	 *  - semctl, via SETVAL and SETALL.
+	 *  - at task exit when performing undo adjustments (see exit_sem).
+	 */
+	int	sempid;
+	spinlock_t	lock;	/* spinlock for fine-grained semtimedop */
+	struct list_head pending_alter; /* pending single-sop operations */
+					/* that alter the semaphore */
+	struct list_head pending_const; /* pending single-sop operations */
+					/* that do not alter the semaphore*/
+	time_t	sem_otime;	/* candidate for sem_otime */
+} ____cacheline_aligned_in_smp;
+
 /* One sem_array data structure for each set of semaphores in the system. */
 struct sem_array {
 	struct kern_ipc_perm	sem_perm;	/* permissions .. see ipc.h */
 	time_t			sem_ctime;	/* last change time */
-	struct sem		*sem_base;	/* ptr to first semaphore in array */
 	struct list_head	pending_alter;	/* pending operations */
 						/* that alter the array */
 	struct list_head	pending_const;	/* pending complex operations */
@@ -21,6 +39,8 @@ struct sem_array {
 	int			sem_nsems;	/* no. of semaphores in array */
 	int			complex_count;	/* pending complex operations */
 	unsigned int		use_global_lock;/* >0: global lock required */
+
+	struct sem		sems[0];
 };
 
 #ifdef CONFIG_SYSVIPC
diff --git a/ipc/sem.c b/ipc/sem.c
index 947dc23..fff8337 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -87,24 +87,6 @@
 #include <linux/uaccess.h>
 #include "util.h"
 
-/* One semaphore structure for each semaphore in the system. */
-struct sem {
-	int	semval;		/* current value */
-	/*
-	 * PID of the process that last modified the semaphore. For
-	 * Linux, specifically these are:
-	 *  - semop
-	 *  - semctl, via SETVAL and SETALL.
-	 *  - at task exit when performing undo adjustments (see exit_sem).
-	 */
-	int	sempid;
-	spinlock_t	lock;	/* spinlock for fine-grained semtimedop */
-	struct list_head pending_alter; /* pending single-sop operations */
-					/* that alter the semaphore */
-	struct list_head pending_const; /* pending single-sop operations */
-					/* that do not alter the semaphore*/
-	time_t	sem_otime;	/* candidate for sem_otime */
-} ____cacheline_aligned_in_smp;
 
 /* One queue for each sleeping process in the system. */
 struct sem_queue {
@@ -175,7 +157,7 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it);
  *	sem_array.sem_undo
  *
  * b) global or semaphore sem_lock() for read/write:
- *	sem_array.sem_base[i].pending_{const,alter}:
+ *	sem_array.sems[i].pending_{const,alter}:
  *
  * c) special:
  *	sem_undo_list.list_proc:
@@ -250,7 +232,7 @@ static void unmerge_queues(struct sem_array *sma)
 	 */
 	list_for_each_entry_safe(q, tq, &sma->pending_alter, list) {
 		struct sem *curr;
-		curr = &sma->sem_base[q->sops[0].sem_num];
+		curr = &sma->sems[q->sops[0].sem_num];
 
 		list_add_tail(&q->list, &curr->pending_alter);
 	}
@@ -270,7 +252,7 @@ static void merge_queues(struct sem_array *sma)
 {
 	int i;
 	for (i = 0; i < sma->sem_nsems; i++) {
-		struct sem *sem = sma->sem_base + i;
+		struct sem *sem = &sma->sems[i];
 
 		list_splice_init(&sem->pending_alter, &sma->pending_alter);
 	}
@@ -306,7 +288,7 @@ static void complexmode_enter(struct sem_array *sma)
 	sma->use_global_lock = USE_GLOBAL_LOCK_HYSTERESIS;
 
 	for (i = 0; i < sma->sem_nsems; i++) {
-		sem = sma->sem_base + i;
+		sem = &sma->sems[i];
 		spin_lock(&sem->lock);
 		spin_unlock(&sem->lock);
 	}
@@ -366,7 +348,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 	 *
 	 * Both facts are tracked by use_global_mode.
 	 */
-	sem = sma->sem_base + sops->sem_num;
+	sem = &sma->sems[sops->sem_num];
 
 	/*
 	 * Initial check for use_global_lock. Just an optimization,
@@ -421,7 +403,7 @@ static inline void sem_unlock(struct sem_array *sma, int locknum)
 		complexmode_tryleave(sma);
 		ipc_unlock_object(&sma->sem_perm);
 	} else {
-		struct sem *sem = sma->sem_base + locknum;
+		struct sem *sem = &sma->sems[locknum];
 		spin_unlock(&sem->lock);
 	}
 }
@@ -487,7 +469,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	if (ns->used_sems + nsems > ns->sc_semmns)
 		return -ENOSPC;
 
-	size = sizeof(*sma) + nsems * sizeof(struct sem);
+	size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
 	sma = ipc_rcu_alloc(size);
 	if (!sma)
 		return -ENOMEM;
@@ -504,12 +486,10 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 		return retval;
 	}
 
-	sma->sem_base = (struct sem *) &sma[1];
-
 	for (i = 0; i < nsems; i++) {
-		INIT_LIST_HEAD(&sma->sem_base[i].pending_alter);
-		INIT_LIST_HEAD(&sma->sem_base[i].pending_const);
-		spin_lock_init(&sma->sem_base[i].lock);
+		INIT_LIST_HEAD(&sma->sems[i].pending_alter);
+		INIT_LIST_HEAD(&sma->sems[i].pending_const);
+		spin_lock_init(&sma->sems[i].lock);
 	}
 
 	sma->complex_count = 0;
@@ -612,7 +592,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 	un = q->undo;
 
 	for (sop = sops; sop < sops + nsops; sop++) {
-		curr = sma->sem_base + sop->sem_num;
+		curr = &sma->sems[sop->sem_num];
 		sem_op = sop->sem_op;
 		result = curr->semval;
 
@@ -639,7 +619,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 	sop--;
 	pid = q->pid;
 	while (sop >= sops) {
-		sma->sem_base[sop->sem_num].sempid = pid;
+		sma->sems[sop->sem_num].sempid = pid;
 		sop--;
 	}
 
@@ -661,7 +641,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 	sop--;
 	while (sop >= sops) {
 		sem_op = sop->sem_op;
-		sma->sem_base[sop->sem_num].semval -= sem_op;
+		sma->sems[sop->sem_num].semval -= sem_op;
 		if (sop->sem_flg & SEM_UNDO)
 			un->semadj[sop->sem_num] += sem_op;
 		sop--;
@@ -692,7 +672,7 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
 	 * until the operations can go through.
 	 */
 	for (sop = sops; sop < sops + nsops; sop++) {
-		curr = sma->sem_base + sop->sem_num;
+		curr = &sma->sems[sop->sem_num];
 		sem_op = sop->sem_op;
 		result = curr->semval;
 
@@ -716,7 +696,7 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
 	}
 
 	for (sop = sops; sop < sops + nsops; sop++) {
-		curr = sma->sem_base + sop->sem_num;
+		curr = &sma->sems[sop->sem_num];
 		sem_op = sop->sem_op;
 		result = curr->semval;
 
@@ -815,7 +795,7 @@ static int wake_const_ops(struct sem_array *sma, int semnum,
 	if (semnum == -1)
 		pending_list = &sma->pending_const;
 	else
-		pending_list = &sma->sem_base[semnum].pending_const;
+		pending_list = &sma->sems[semnum].pending_const;
 
 	list_for_each_entry_safe(q, tmp, pending_list, list) {
 		int error = perform_atomic_semop(sma, q);
@@ -856,7 +836,7 @@ static int do_smart_wakeup_zero(struct sem_array *sma, struct sembuf *sops,
 		for (i = 0; i < nsops; i++) {
 			int num = sops[i].sem_num;
 
-			if (sma->sem_base[num].semval == 0) {
+			if (sma->sems[num].semval == 0) {
 				got_zero = 1;
 				semop_completed |= wake_const_ops(sma, num, wake_q);
 			}
@@ -867,7 +847,7 @@ static int do_smart_wakeup_zero(struct sem_array *sma, struct sembuf *sops,
 		 * Assume all were changed.
 		 */
 		for (i = 0; i < sma->sem_nsems; i++) {
-			if (sma->sem_base[i].semval == 0) {
+			if (sma->sems[i].semval == 0) {
 				got_zero = 1;
 				semop_completed |= wake_const_ops(sma, i, wake_q);
 			}
@@ -909,7 +889,7 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
 	if (semnum == -1)
 		pending_list = &sma->pending_alter;
 	else
-		pending_list = &sma->sem_base[semnum].pending_alter;
+		pending_list = &sma->sems[semnum].pending_alter;
 
 again:
 	list_for_each_entry_safe(q, tmp, pending_list, list) {
@@ -922,7 +902,7 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
 		 * be in the  per semaphore pending queue, and decrements
 		 * cannot be successful if the value is already 0.
 		 */
-		if (semnum != -1 && sma->sem_base[semnum].semval == 0)
+		if (semnum != -1 && sma->sems[semnum].semval == 0)
 			break;
 
 		error = perform_atomic_semop(sma, q);
@@ -959,9 +939,9 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
 static void set_semotime(struct sem_array *sma, struct sembuf *sops)
 {
 	if (sops == NULL) {
-		sma->sem_base[0].sem_otime = get_seconds();
+		sma->sems[0].sem_otime = get_seconds();
 	} else {
-		sma->sem_base[sops[0].sem_num].sem_otime =
+		sma->sems[sops[0].sem_num].sem_otime =
 							get_seconds();
 	}
 }
@@ -1067,9 +1047,9 @@ static int count_semcnt(struct sem_array *sma, ushort semnum,
 	semcnt = 0;
 	/* First: check the simple operations. They are easy to evaluate */
 	if (count_zero)
-		l = &sma->sem_base[semnum].pending_const;
+		l = &sma->sems[semnum].pending_const;
 	else
-		l = &sma->sem_base[semnum].pending_alter;
+		l = &sma->sems[semnum].pending_alter;
 
 	list_for_each_entry(q, l, list) {
 		/* all task on a per-semaphore list sleep on exactly
@@ -1124,7 +1104,7 @@ static void freeary(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
 		wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
 	}
 	for (i = 0; i < sma->sem_nsems; i++) {
-		struct sem *sem = sma->sem_base + i;
+		struct sem *sem = &sma->sems[i];
 		list_for_each_entry_safe(q, tq, &sem->pending_const, list) {
 			unlink_queue(sma, q);
 			wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
@@ -1174,9 +1154,9 @@ static time_t get_semotime(struct sem_array *sma)
 	int i;
 	time_t res;
 
-	res = sma->sem_base[0].sem_otime;
+	res = sma->sems[0].sem_otime;
 	for (i = 1; i < sma->sem_nsems; i++) {
-		time_t to = sma->sem_base[i].sem_otime;
+		time_t to = sma->sems[i].sem_otime;
 
 		if (to > res)
 			res = to;
@@ -1325,7 +1305,7 @@ static int semctl_setval(struct ipc_namespace *ns, int semid, int semnum,
 		return -EIDRM;
 	}
 
-	curr = &sma->sem_base[semnum];
+	curr = &sma->sems[semnum];
 
 	ipc_assert_locked_object(&sma->sem_perm);
 	list_for_each_entry(un, &sma->list_id, list_id)
@@ -1402,7 +1382,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 			}
 		}
 		for (i = 0; i < sma->sem_nsems; i++)
-			sem_io[i] = sma->sem_base[i].semval;
+			sem_io[i] = sma->sems[i].semval;
 		sem_unlock(sma, -1);
 		rcu_read_unlock();
 		err = 0;
@@ -1450,8 +1430,8 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		}
 
 		for (i = 0; i < nsems; i++) {
-			sma->sem_base[i].semval = sem_io[i];
-			sma->sem_base[i].sempid = task_tgid_vnr(current);
+			sma->sems[i].semval = sem_io[i];
+			sma->sems[i].sempid = task_tgid_vnr(current);
 		}
 
 		ipc_assert_locked_object(&sma->sem_perm);
@@ -1476,7 +1456,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		err = -EIDRM;
 		goto out_unlock;
 	}
-	curr = &sma->sem_base[semnum];
+	curr = &sma->sems[semnum];
 
 	switch (cmd) {
 	case GETVAL:
@@ -1932,7 +1912,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	 */
 	if (nsops == 1) {
 		struct sem *curr;
-		curr = &sma->sem_base[sops->sem_num];
+		curr = &sma->sems[sops->sem_num];
 
 		if (alter) {
 			if (sma->complex_count) {
@@ -2146,7 +2126,7 @@ void exit_sem(struct task_struct *tsk)
 
 		/* perform adjustments registered in un */
 		for (i = 0; i < sma->sem_nsems; i++) {
-			struct sem *semaphore = &sma->sem_base[i];
+			struct sem *semaphore = &sma->sems[i];
 			if (un->semadj[i]) {
 				semaphore->semval += un->semadj[i];
 				/*
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 0/2] Misc cleanups for ipc
  2017-05-08 22:23 [PATCH] ipc/sem: Avoid indexing past end of sem_array Kees Cook
  2017-05-14 13:54 ` Manfred Spraul
@ 2017-05-15 17:19 ` Manfred Spraul
  2017-05-15 17:19   ` [PATCH 1/3] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
                     ` (2 more replies)
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
  2 siblings, 3 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-15 17:19 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

Hi all,

could you check the following three patches?

As explained, I try to combine the changes for static analyzers and for
the randstruct gcc plugin with cleanups.

0001-ipc-sem.c-remove-sem_base-embed-struct-sem.patch:
	sem_base is not needed. Instead of improving the casts,
	remove it entirely.

0002-ipc-merge-ipc_rcu-and-kern_ipc_perm.patch
	struct ipc_rcu and struct kern_ipc_perm are always used together.
	Thus merge them.

0003-include-linux-sem.h-Correctly-document-sem_ctime.patch
	Noticed by chance: The comment regarding sem_ctime is wrong/
	misleading.
	And man semctl(2) contains the same wrong comment.

Especially:
@Kees: Do the patches resolve all warnings?

@Andrew: Could you add them to your tree?
And can we still aim for 4.13?

@Michael: 
Should we update man semctl(2)?
Several years ago, I did a review and found that sem_ctime is the time
of the last change only on Coherent...

http://calculix-rpm.sourceforge.net/sysvsem.html

--
        Manfred

^ permalink raw reply	[flat|nested] 38+ messages in thread

* [PATCH 1/3] ipc/sem.c: remove sem_base, embed struct sem
  2017-05-15 17:19 ` [PATCH 0/2] Misc cleanups for ipc Manfred Spraul
@ 2017-05-15 17:19   ` Manfred Spraul
  2017-05-15 20:08     ` Andrew Morton
  2017-05-16  5:46     ` Christoph Hellwig
  2017-05-15 17:19   ` [PATCH 2/3] ipc: merge ipc_rcu and kern_ipc_perm Manfred Spraul
  2017-05-15 17:19   ` [PATCH 3/3] include/linux/sem.h: Correctly document sem_ctime Manfred Spraul
  2 siblings, 2 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-15 17:19 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

sma->sem_base is initialized with
	sma->sem_base = (struct sem *) &sma[1];

The current code has four problems:
- There is an unnecessary pointer dereference - sem_base is not needed.
- Alignment for struct sem only works by chance.
- The current code causes false positives for static code analysis.
- This is a cast between different non-void types, which the future
  randstruct GCC plugin warns on.

And, as bonus, the code size gets smaller:

Before:
  0 .text         00003770
After:
  0 .text         0000374e

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 include/linux/sem.h | 22 +++++++++++++-
 ipc/sem.c           | 88 +++++++++++++++++++++--------------------------------
 2 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/include/linux/sem.h b/include/linux/sem.h
index 9edec92..c2218a4 100644
--- a/include/linux/sem.h
+++ b/include/linux/sem.h
@@ -8,11 +8,29 @@
 
 struct task_struct;
 
+/* One semaphore structure for each semaphore in the system. */
+struct sem {
+	int	semval;		/* current value */
+	/*
+	 * PID of the process that last modified the semaphore. For
+	 * Linux, specifically these are:
+	 *  - semop
+	 *  - semctl, via SETVAL and SETALL.
+	 *  - at task exit when performing undo adjustments (see exit_sem).
+	 */
+	int	sempid;
+	spinlock_t	lock;	/* spinlock for fine-grained semtimedop */
+	struct list_head pending_alter; /* pending single-sop operations */
+					/* that alter the semaphore */
+	struct list_head pending_const; /* pending single-sop operations */
+					/* that do not alter the semaphore*/
+	time_t	sem_otime;	/* candidate for sem_otime */
+} ____cacheline_aligned_in_smp;
+
 /* One sem_array data structure for each set of semaphores in the system. */
 struct sem_array {
 	struct kern_ipc_perm	sem_perm;	/* permissions .. see ipc.h */
 	time_t			sem_ctime;	/* last change time */
-	struct sem		*sem_base;	/* ptr to first semaphore in array */
 	struct list_head	pending_alter;	/* pending operations */
 						/* that alter the array */
 	struct list_head	pending_const;	/* pending complex operations */
@@ -21,6 +39,8 @@ struct sem_array {
 	int			sem_nsems;	/* no. of semaphores in array */
 	int			complex_count;	/* pending complex operations */
 	unsigned int		use_global_lock;/* >0: global lock required */
+
+	struct sem		sems[0];
 };
 
 #ifdef CONFIG_SYSVIPC
diff --git a/ipc/sem.c b/ipc/sem.c
index 947dc23..fff8337 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -87,24 +87,6 @@
 #include <linux/uaccess.h>
 #include "util.h"
 
-/* One semaphore structure for each semaphore in the system. */
-struct sem {
-	int	semval;		/* current value */
-	/*
-	 * PID of the process that last modified the semaphore. For
-	 * Linux, specifically these are:
-	 *  - semop
-	 *  - semctl, via SETVAL and SETALL.
-	 *  - at task exit when performing undo adjustments (see exit_sem).
-	 */
-	int	sempid;
-	spinlock_t	lock;	/* spinlock for fine-grained semtimedop */
-	struct list_head pending_alter; /* pending single-sop operations */
-					/* that alter the semaphore */
-	struct list_head pending_const; /* pending single-sop operations */
-					/* that do not alter the semaphore*/
-	time_t	sem_otime;	/* candidate for sem_otime */
-} ____cacheline_aligned_in_smp;
 
 /* One queue for each sleeping process in the system. */
 struct sem_queue {
@@ -175,7 +157,7 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it);
  *	sem_array.sem_undo
  *
  * b) global or semaphore sem_lock() for read/write:
- *	sem_array.sem_base[i].pending_{const,alter}:
+ *	sem_array.sems[i].pending_{const,alter}:
  *
  * c) special:
  *	sem_undo_list.list_proc:
@@ -250,7 +232,7 @@ static void unmerge_queues(struct sem_array *sma)
 	 */
 	list_for_each_entry_safe(q, tq, &sma->pending_alter, list) {
 		struct sem *curr;
-		curr = &sma->sem_base[q->sops[0].sem_num];
+		curr = &sma->sems[q->sops[0].sem_num];
 
 		list_add_tail(&q->list, &curr->pending_alter);
 	}
@@ -270,7 +252,7 @@ static void merge_queues(struct sem_array *sma)
 {
 	int i;
 	for (i = 0; i < sma->sem_nsems; i++) {
-		struct sem *sem = sma->sem_base + i;
+		struct sem *sem = &sma->sems[i];
 
 		list_splice_init(&sem->pending_alter, &sma->pending_alter);
 	}
@@ -306,7 +288,7 @@ static void complexmode_enter(struct sem_array *sma)
 	sma->use_global_lock = USE_GLOBAL_LOCK_HYSTERESIS;
 
 	for (i = 0; i < sma->sem_nsems; i++) {
-		sem = sma->sem_base + i;
+		sem = &sma->sems[i];
 		spin_lock(&sem->lock);
 		spin_unlock(&sem->lock);
 	}
@@ -366,7 +348,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 	 *
 	 * Both facts are tracked by use_global_mode.
 	 */
-	sem = sma->sem_base + sops->sem_num;
+	sem = &sma->sems[sops->sem_num];
 
 	/*
 	 * Initial check for use_global_lock. Just an optimization,
@@ -421,7 +403,7 @@ static inline void sem_unlock(struct sem_array *sma, int locknum)
 		complexmode_tryleave(sma);
 		ipc_unlock_object(&sma->sem_perm);
 	} else {
-		struct sem *sem = sma->sem_base + locknum;
+		struct sem *sem = &sma->sems[locknum];
 		spin_unlock(&sem->lock);
 	}
 }
@@ -487,7 +469,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	if (ns->used_sems + nsems > ns->sc_semmns)
 		return -ENOSPC;
 
-	size = sizeof(*sma) + nsems * sizeof(struct sem);
+	size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
 	sma = ipc_rcu_alloc(size);
 	if (!sma)
 		return -ENOMEM;
@@ -504,12 +486,10 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 		return retval;
 	}
 
-	sma->sem_base = (struct sem *) &sma[1];
-
 	for (i = 0; i < nsems; i++) {
-		INIT_LIST_HEAD(&sma->sem_base[i].pending_alter);
-		INIT_LIST_HEAD(&sma->sem_base[i].pending_const);
-		spin_lock_init(&sma->sem_base[i].lock);
+		INIT_LIST_HEAD(&sma->sems[i].pending_alter);
+		INIT_LIST_HEAD(&sma->sems[i].pending_const);
+		spin_lock_init(&sma->sems[i].lock);
 	}
 
 	sma->complex_count = 0;
@@ -612,7 +592,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 	un = q->undo;
 
 	for (sop = sops; sop < sops + nsops; sop++) {
-		curr = sma->sem_base + sop->sem_num;
+		curr = &sma->sems[sop->sem_num];
 		sem_op = sop->sem_op;
 		result = curr->semval;
 
@@ -639,7 +619,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 	sop--;
 	pid = q->pid;
 	while (sop >= sops) {
-		sma->sem_base[sop->sem_num].sempid = pid;
+		sma->sems[sop->sem_num].sempid = pid;
 		sop--;
 	}
 
@@ -661,7 +641,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 	sop--;
 	while (sop >= sops) {
 		sem_op = sop->sem_op;
-		sma->sem_base[sop->sem_num].semval -= sem_op;
+		sma->sems[sop->sem_num].semval -= sem_op;
 		if (sop->sem_flg & SEM_UNDO)
 			un->semadj[sop->sem_num] += sem_op;
 		sop--;
@@ -692,7 +672,7 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
 	 * until the operations can go through.
 	 */
 	for (sop = sops; sop < sops + nsops; sop++) {
-		curr = sma->sem_base + sop->sem_num;
+		curr = &sma->sems[sop->sem_num];
 		sem_op = sop->sem_op;
 		result = curr->semval;
 
@@ -716,7 +696,7 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
 	}
 
 	for (sop = sops; sop < sops + nsops; sop++) {
-		curr = sma->sem_base + sop->sem_num;
+		curr = &sma->sems[sop->sem_num];
 		sem_op = sop->sem_op;
 		result = curr->semval;
 
@@ -815,7 +795,7 @@ static int wake_const_ops(struct sem_array *sma, int semnum,
 	if (semnum == -1)
 		pending_list = &sma->pending_const;
 	else
-		pending_list = &sma->sem_base[semnum].pending_const;
+		pending_list = &sma->sems[semnum].pending_const;
 
 	list_for_each_entry_safe(q, tmp, pending_list, list) {
 		int error = perform_atomic_semop(sma, q);
@@ -856,7 +836,7 @@ static int do_smart_wakeup_zero(struct sem_array *sma, struct sembuf *sops,
 		for (i = 0; i < nsops; i++) {
 			int num = sops[i].sem_num;
 
-			if (sma->sem_base[num].semval == 0) {
+			if (sma->sems[num].semval == 0) {
 				got_zero = 1;
 				semop_completed |= wake_const_ops(sma, num, wake_q);
 			}
@@ -867,7 +847,7 @@ static int do_smart_wakeup_zero(struct sem_array *sma, struct sembuf *sops,
 		 * Assume all were changed.
 		 */
 		for (i = 0; i < sma->sem_nsems; i++) {
-			if (sma->sem_base[i].semval == 0) {
+			if (sma->sems[i].semval == 0) {
 				got_zero = 1;
 				semop_completed |= wake_const_ops(sma, i, wake_q);
 			}
@@ -909,7 +889,7 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
 	if (semnum == -1)
 		pending_list = &sma->pending_alter;
 	else
-		pending_list = &sma->sem_base[semnum].pending_alter;
+		pending_list = &sma->sems[semnum].pending_alter;
 
 again:
 	list_for_each_entry_safe(q, tmp, pending_list, list) {
@@ -922,7 +902,7 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
 		 * be in the  per semaphore pending queue, and decrements
 		 * cannot be successful if the value is already 0.
 		 */
-		if (semnum != -1 && sma->sem_base[semnum].semval == 0)
+		if (semnum != -1 && sma->sems[semnum].semval == 0)
 			break;
 
 		error = perform_atomic_semop(sma, q);
@@ -959,9 +939,9 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
 static void set_semotime(struct sem_array *sma, struct sembuf *sops)
 {
 	if (sops == NULL) {
-		sma->sem_base[0].sem_otime = get_seconds();
+		sma->sems[0].sem_otime = get_seconds();
 	} else {
-		sma->sem_base[sops[0].sem_num].sem_otime =
+		sma->sems[sops[0].sem_num].sem_otime =
 							get_seconds();
 	}
 }
@@ -1067,9 +1047,9 @@ static int count_semcnt(struct sem_array *sma, ushort semnum,
 	semcnt = 0;
 	/* First: check the simple operations. They are easy to evaluate */
 	if (count_zero)
-		l = &sma->sem_base[semnum].pending_const;
+		l = &sma->sems[semnum].pending_const;
 	else
-		l = &sma->sem_base[semnum].pending_alter;
+		l = &sma->sems[semnum].pending_alter;
 
 	list_for_each_entry(q, l, list) {
 		/* all task on a per-semaphore list sleep on exactly
@@ -1124,7 +1104,7 @@ static void freeary(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
 		wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
 	}
 	for (i = 0; i < sma->sem_nsems; i++) {
-		struct sem *sem = sma->sem_base + i;
+		struct sem *sem = &sma->sems[i];
 		list_for_each_entry_safe(q, tq, &sem->pending_const, list) {
 			unlink_queue(sma, q);
 			wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
@@ -1174,9 +1154,9 @@ static time_t get_semotime(struct sem_array *sma)
 	int i;
 	time_t res;
 
-	res = sma->sem_base[0].sem_otime;
+	res = sma->sems[0].sem_otime;
 	for (i = 1; i < sma->sem_nsems; i++) {
-		time_t to = sma->sem_base[i].sem_otime;
+		time_t to = sma->sems[i].sem_otime;
 
 		if (to > res)
 			res = to;
@@ -1325,7 +1305,7 @@ static int semctl_setval(struct ipc_namespace *ns, int semid, int semnum,
 		return -EIDRM;
 	}
 
-	curr = &sma->sem_base[semnum];
+	curr = &sma->sems[semnum];
 
 	ipc_assert_locked_object(&sma->sem_perm);
 	list_for_each_entry(un, &sma->list_id, list_id)
@@ -1402,7 +1382,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 			}
 		}
 		for (i = 0; i < sma->sem_nsems; i++)
-			sem_io[i] = sma->sem_base[i].semval;
+			sem_io[i] = sma->sems[i].semval;
 		sem_unlock(sma, -1);
 		rcu_read_unlock();
 		err = 0;
@@ -1450,8 +1430,8 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		}
 
 		for (i = 0; i < nsems; i++) {
-			sma->sem_base[i].semval = sem_io[i];
-			sma->sem_base[i].sempid = task_tgid_vnr(current);
+			sma->sems[i].semval = sem_io[i];
+			sma->sems[i].sempid = task_tgid_vnr(current);
 		}
 
 		ipc_assert_locked_object(&sma->sem_perm);
@@ -1476,7 +1456,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		err = -EIDRM;
 		goto out_unlock;
 	}
-	curr = &sma->sem_base[semnum];
+	curr = &sma->sems[semnum];
 
 	switch (cmd) {
 	case GETVAL:
@@ -1932,7 +1912,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	 */
 	if (nsops == 1) {
 		struct sem *curr;
-		curr = &sma->sem_base[sops->sem_num];
+		curr = &sma->sems[sops->sem_num];
 
 		if (alter) {
 			if (sma->complex_count) {
@@ -2146,7 +2126,7 @@ void exit_sem(struct task_struct *tsk)
 
 		/* perform adjustments registered in un */
 		for (i = 0; i < sma->sem_nsems; i++) {
-			struct sem *semaphore = &sma->sem_base[i];
+			struct sem *semaphore = &sma->sems[i];
 			if (un->semadj[i]) {
 				semaphore->semval += un->semadj[i];
 				/*
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 2/3] ipc: merge ipc_rcu and kern_ipc_perm
  2017-05-15 17:19 ` [PATCH 0/2] Misc cleanups for ipc Manfred Spraul
  2017-05-15 17:19   ` [PATCH 1/3] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
@ 2017-05-15 17:19   ` Manfred Spraul
  2017-05-16  0:03     ` Kees Cook
  2017-05-15 17:19   ` [PATCH 3/3] include/linux/sem.h: Correctly document sem_ctime Manfred Spraul
  2 siblings, 1 reply; 38+ messages in thread
From: Manfred Spraul @ 2017-05-15 17:19 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

ipc has two management structures that exist for every id:
- struct kern_ipc_perm, which contains e.g. the permissions.
- struct ipc_rcu, which contains the rcu head for rcu handling and
  the refcount.

The patch merges both structures.
As a bonus, we may save one cacheline, because both structures are
cacheline aligned.
In addition, it reduces the number of casts; instead, most codepaths can
use container_of().

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 include/linux/ipc.h |  3 +++
 ipc/msg.c           | 22 ++++++++++++++--------
 ipc/sem.c           | 35 ++++++++++++++++++++---------------
 ipc/shm.c           | 21 ++++++++++++++-------
 ipc/util.c          | 33 +++++++++++++++------------------
 ipc/util.h          | 18 +++++++-----------
 6 files changed, 73 insertions(+), 59 deletions(-)

diff --git a/include/linux/ipc.h b/include/linux/ipc.h
index 71fd92d..5591f055 100644
--- a/include/linux/ipc.h
+++ b/include/linux/ipc.h
@@ -20,6 +20,9 @@ struct kern_ipc_perm {
 	umode_t		mode;
 	unsigned long	seq;
 	void		*security;
+
+	struct rcu_head rcu;
+	atomic_t refcount;
 } ____cacheline_aligned_in_smp;
 
 #endif /* _LINUX_IPC_H */
diff --git a/ipc/msg.c b/ipc/msg.c
index 104926d..e9785c4 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -97,8 +97,8 @@ static inline void msg_rmid(struct ipc_namespace *ns, struct msg_queue *s)
 
 static void msg_rcu_free(struct rcu_head *head)
 {
-	struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
-	struct msg_queue *msq = ipc_rcu_to_struct(p);
+	struct kern_ipc_perm *p = container_of(head, struct kern_ipc_perm, rcu);
+	struct msg_queue *msq = container_of(p, struct msg_queue, q_perm);
 
 	security_msg_queue_free(msq);
 	ipc_rcu_free(head);
@@ -118,7 +118,13 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	key_t key = params->key;
 	int msgflg = params->flg;
 
-	msq = ipc_rcu_alloc(sizeof(*msq));
+	if (offsetof(struct msg_queue, q_perm) != 0) {
+		pr_err("Invalid struct sem_perm, failing msgget().\n");
+		return -ENOMEM;
+	}
+
+	msq = container_of(ipc_rcu_alloc(sizeof(*msq)), struct msg_queue,
+				q_perm);
 	if (!msq)
 		return -ENOMEM;
 
@@ -128,7 +134,7 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	msq->q_perm.security = NULL;
 	retval = security_msg_queue_alloc(msq);
 	if (retval) {
-		ipc_rcu_putref(msq, ipc_rcu_free);
+		ipc_rcu_putref(&msq->q_perm, ipc_rcu_free);
 		return retval;
 	}
 
@@ -144,7 +150,7 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	/* ipc_addid() locks msq upon success. */
 	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
 	if (id < 0) {
-		ipc_rcu_putref(msq, msg_rcu_free);
+		ipc_rcu_putref(&msq->q_perm, msg_rcu_free);
 		return id;
 	}
 
@@ -249,7 +255,7 @@ static void freeque(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
 		free_msg(msg);
 	}
 	atomic_sub(msq->q_cbytes, &ns->msg_bytes);
-	ipc_rcu_putref(msq, msg_rcu_free);
+	ipc_rcu_putref(&msq->q_perm, msg_rcu_free);
 }
 
 /*
@@ -688,7 +694,7 @@ long do_msgsnd(int msqid, long mtype, void __user *mtext,
 		/* enqueue the sender and prepare to block */
 		ss_add(msq, &s, msgsz);
 
-		if (!ipc_rcu_getref(msq)) {
+		if (!ipc_rcu_getref(&msq->q_perm)) {
 			err = -EIDRM;
 			goto out_unlock0;
 		}
@@ -700,7 +706,7 @@ long do_msgsnd(int msqid, long mtype, void __user *mtext,
 		rcu_read_lock();
 		ipc_lock_object(&msq->q_perm);
 
-		ipc_rcu_putref(msq, msg_rcu_free);
+		ipc_rcu_putref(&msq->q_perm, msg_rcu_free);
 		/* raced with RMID? */
 		if (!ipc_valid_object(&msq->q_perm)) {
 			err = -EIDRM;
diff --git a/ipc/sem.c b/ipc/sem.c
index fff8337..18241c1 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -260,8 +260,8 @@ static void merge_queues(struct sem_array *sma)
 
 static void sem_rcu_free(struct rcu_head *head)
 {
-	struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
-	struct sem_array *sma = ipc_rcu_to_struct(p);
+	struct kern_ipc_perm *p = container_of(head, struct kern_ipc_perm, rcu);
+	struct sem_array *sma = container_of(p, struct sem_array, sem_perm);
 
 	security_sem_free(sma);
 	ipc_rcu_free(head);
@@ -438,7 +438,7 @@ static inline struct sem_array *sem_obtain_object_check(struct ipc_namespace *ns
 static inline void sem_lock_and_putref(struct sem_array *sma)
 {
 	sem_lock(sma, NULL, -1);
-	ipc_rcu_putref(sma, sem_rcu_free);
+	ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 }
 
 static inline void sem_rmid(struct ipc_namespace *ns, struct sem_array *s)
@@ -469,8 +469,13 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	if (ns->used_sems + nsems > ns->sc_semmns)
 		return -ENOSPC;
 
+	if (offsetof(struct sem_array, sem_perm) != 0) {
+		pr_err("Invalid struct sem_perm, failing semget().\n");
+		return -ENOMEM;
+	}
+
 	size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
-	sma = ipc_rcu_alloc(size);
+	sma = container_of(ipc_rcu_alloc(size), struct sem_array, sem_perm);
 	if (!sma)
 		return -ENOMEM;
 
@@ -482,7 +487,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	sma->sem_perm.security = NULL;
 	retval = security_sem_alloc(sma);
 	if (retval) {
-		ipc_rcu_putref(sma, ipc_rcu_free);
+		ipc_rcu_putref(&sma->sem_perm, ipc_rcu_free);
 		return retval;
 	}
 
@@ -502,7 +507,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 
 	id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
 	if (id < 0) {
-		ipc_rcu_putref(sma, sem_rcu_free);
+		ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 		return id;
 	}
 	ns->used_sems += nsems;
@@ -1122,7 +1127,7 @@ static void freeary(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
 
 	wake_up_q(&wake_q);
 	ns->used_sems -= sma->sem_nsems;
-	ipc_rcu_putref(sma, sem_rcu_free);
+	ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 }
 
 static unsigned long copy_semid_to_user(void __user *buf, struct semid64_ds *in, int version)
@@ -1362,7 +1367,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 			goto out_unlock;
 		}
 		if (nsems > SEMMSL_FAST) {
-			if (!ipc_rcu_getref(sma)) {
+			if (!ipc_rcu_getref(&sma->sem_perm)) {
 				err = -EIDRM;
 				goto out_unlock;
 			}
@@ -1370,7 +1375,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 			rcu_read_unlock();
 			sem_io = ipc_alloc(sizeof(ushort)*nsems);
 			if (sem_io == NULL) {
-				ipc_rcu_putref(sma, sem_rcu_free);
+				ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 				return -ENOMEM;
 			}
 
@@ -1395,7 +1400,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		int i;
 		struct sem_undo *un;
 
-		if (!ipc_rcu_getref(sma)) {
+		if (!ipc_rcu_getref(&sma->sem_perm)) {
 			err = -EIDRM;
 			goto out_rcu_wakeup;
 		}
@@ -1404,20 +1409,20 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		if (nsems > SEMMSL_FAST) {
 			sem_io = ipc_alloc(sizeof(ushort)*nsems);
 			if (sem_io == NULL) {
-				ipc_rcu_putref(sma, sem_rcu_free);
+				ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 				return -ENOMEM;
 			}
 		}
 
 		if (copy_from_user(sem_io, p, nsems*sizeof(ushort))) {
-			ipc_rcu_putref(sma, sem_rcu_free);
+			ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 			err = -EFAULT;
 			goto out_free;
 		}
 
 		for (i = 0; i < nsems; i++) {
 			if (sem_io[i] > SEMVMX) {
-				ipc_rcu_putref(sma, sem_rcu_free);
+				ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 				err = -ERANGE;
 				goto out_free;
 			}
@@ -1699,7 +1704,7 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
 	}
 
 	nsems = sma->sem_nsems;
-	if (!ipc_rcu_getref(sma)) {
+	if (!ipc_rcu_getref(&sma->sem_perm)) {
 		rcu_read_unlock();
 		un = ERR_PTR(-EIDRM);
 		goto out;
@@ -1709,7 +1714,7 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
 	/* step 2: allocate new undo structure */
 	new = kzalloc(sizeof(struct sem_undo) + sizeof(short)*nsems, GFP_KERNEL);
 	if (!new) {
-		ipc_rcu_putref(sma, sem_rcu_free);
+		ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 		return ERR_PTR(-ENOMEM);
 	}
 
diff --git a/ipc/shm.c b/ipc/shm.c
index 34c4344..cec6df1 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -174,9 +174,10 @@ static inline void shm_lock_by_ptr(struct shmid_kernel *ipcp)
 
 static void shm_rcu_free(struct rcu_head *head)
 {
-	struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
-	struct shmid_kernel *shp = ipc_rcu_to_struct(p);
-
+	struct kern_ipc_perm *ptr = container_of(head, struct kern_ipc_perm,
+							rcu);
+	struct shmid_kernel *shp = container_of(ptr, struct shmid_kernel,
+							shm_perm);
 	security_shm_free(shp);
 	ipc_rcu_free(head);
 }
@@ -241,7 +242,7 @@ static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp)
 		user_shm_unlock(i_size_read(file_inode(shm_file)),
 				shp->mlock_user);
 	fput(shm_file);
-	ipc_rcu_putref(shp, shm_rcu_free);
+	ipc_rcu_putref(&shp->shm_perm, shm_rcu_free);
 }
 
 /*
@@ -542,7 +543,13 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 			ns->shm_tot + numpages > ns->shm_ctlall)
 		return -ENOSPC;
 
-	shp = ipc_rcu_alloc(sizeof(*shp));
+	if (offsetof(struct shmid_kernel, shm_perm) != 0) {
+		pr_err("Invalid struct sem_perm, failing msgget().\n");
+		return -ENOMEM;
+	}
+
+	shp = container_of(ipc_rcu_alloc(sizeof(*shp)), struct shmid_kernel,
+				shm_perm);
 	if (!shp)
 		return -ENOMEM;
 
@@ -553,7 +560,7 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 	shp->shm_perm.security = NULL;
 	error = security_shm_alloc(shp);
 	if (error) {
-		ipc_rcu_putref(shp, ipc_rcu_free);
+		ipc_rcu_putref(&shp->shm_perm, ipc_rcu_free);
 		return error;
 	}
 
@@ -624,7 +631,7 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 		user_shm_unlock(size, shp->mlock_user);
 	fput(file);
 no_file:
-	ipc_rcu_putref(shp, shm_rcu_free);
+	ipc_rcu_putref(&shp->shm_perm, shm_rcu_free);
 	return error;
 }
 
diff --git a/ipc/util.c b/ipc/util.c
index caec7b1..9dcc08b 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -418,46 +418,43 @@ void ipc_free(void *ptr)
 }
 
 /**
- * ipc_rcu_alloc - allocate ipc and rcu space
+ * ipc_rcu_alloc - allocate ipc space
  * @size: size desired
  *
- * Allocate memory for the rcu header structure +  the object.
- * Returns the pointer to the object or NULL upon failure.
+ * Allocate memory for an ipc object.
+ * The first member must be struct kern_ipc_perm.
  */
-void *ipc_rcu_alloc(int size)
+struct kern_ipc_perm *ipc_rcu_alloc(int size)
 {
 	/*
 	 * We prepend the allocation with the rcu struct
 	 */
-	struct ipc_rcu *out = ipc_alloc(sizeof(struct ipc_rcu) + size);
+	struct kern_ipc_perm *out = ipc_alloc(size);
 	if (unlikely(!out))
 		return NULL;
 	atomic_set(&out->refcount, 1);
-	return out + 1;
+	return out;
 }
 
-int ipc_rcu_getref(void *ptr)
+int ipc_rcu_getref(struct kern_ipc_perm *ptr)
 {
-	struct ipc_rcu *p = ((struct ipc_rcu *)ptr) - 1;
-
-	return atomic_inc_not_zero(&p->refcount);
+	return atomic_inc_not_zero(&ptr->refcount);
 }
 
-void ipc_rcu_putref(void *ptr, void (*func)(struct rcu_head *head))
+void ipc_rcu_putref(struct kern_ipc_perm *ptr,
+			void (*func)(struct rcu_head *head))
 {
-	struct ipc_rcu *p = ((struct ipc_rcu *)ptr) - 1;
-
-	if (!atomic_dec_and_test(&p->refcount))
+	if (!atomic_dec_and_test(&ptr->refcount))
 		return;
 
-	call_rcu(&p->rcu, func);
+	call_rcu(&ptr->rcu, func);
 }
 
-void ipc_rcu_free(struct rcu_head *head)
+void ipc_rcu_free(struct rcu_head *h)
 {
-	struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
+	struct kern_ipc_perm *ptr = container_of(h, struct kern_ipc_perm, rcu);
 
-	kvfree(p);
+	kvfree(ptr);
 }
 
 /**
diff --git a/ipc/util.h b/ipc/util.h
index 60ddccc..09d0f91 100644
--- a/ipc/util.h
+++ b/ipc/util.h
@@ -47,13 +47,6 @@ static inline void msg_exit_ns(struct ipc_namespace *ns) { }
 static inline void shm_exit_ns(struct ipc_namespace *ns) { }
 #endif
 
-struct ipc_rcu {
-	struct rcu_head rcu;
-	atomic_t refcount;
-} ____cacheline_aligned_in_smp;
-
-#define ipc_rcu_to_struct(p)  ((void *)(p+1))
-
 /*
  * Structure that holds the parameters needed by the ipc operations
  * (see after)
@@ -125,11 +118,14 @@ void ipc_free(void *ptr);
  * Objects are reference counted, they start with reference count 1.
  * getref increases the refcount, the putref call that reduces the recount
  * to 0 schedules the rcu destruction. Caller must guarantee locking.
+ *
+ * struct kern_ipc_perm must be the first member in the allocated structure.
  */
-void *ipc_rcu_alloc(int size);
-int ipc_rcu_getref(void *ptr);
-void ipc_rcu_putref(void *ptr, void (*func)(struct rcu_head *head));
-void ipc_rcu_free(struct rcu_head *head);
+struct kern_ipc_perm *ipc_rcu_alloc(int size);
+int ipc_rcu_getref(struct kern_ipc_perm *ptr);
+void ipc_rcu_putref(struct kern_ipc_perm *ptr,
+			void (*func)(struct rcu_head *head));
+void ipc_rcu_free(struct rcu_head *h);
 
 struct kern_ipc_perm *ipc_lock(struct ipc_ids *, int);
 struct kern_ipc_perm *ipc_obtain_object_idr(struct ipc_ids *ids, int id);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 3/3] include/linux/sem.h: Correctly document sem_ctime
  2017-05-15 17:19 ` [PATCH 0/2] Misc cleanups for ipc Manfred Spraul
  2017-05-15 17:19   ` [PATCH 1/3] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
  2017-05-15 17:19   ` [PATCH 2/3] ipc: merge ipc_rcu and kern_ipc_perm Manfred Spraul
@ 2017-05-15 17:19   ` Manfred Spraul
  2 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-15 17:19 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf,
	Manfred Spraul, linux-man

sem_ctime is initialized to the semget() time and then updated at every
semctl() that changes the array.
Thus it does not represent the time of the last change.

In particular, semop() calls are recorded only in sem_otime, not in
sem_ctime.

This is already described in ipc/sem.c; I had just overlooked that the
comment in include/linux/sem.h and man semctl(2) claim otherwise.

So: correct the wrong comments.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: linux-man@vger.kernel.org
---
 include/linux/sem.h      | 2 +-
 include/uapi/linux/sem.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/sem.h b/include/linux/sem.h
index c2218a4..fc7af8e 100644
--- a/include/linux/sem.h
+++ b/include/linux/sem.h
@@ -30,7 +30,7 @@ struct sem {
 /* One sem_array data structure for each set of semaphores in the system. */
 struct sem_array {
 	struct kern_ipc_perm	sem_perm;	/* permissions .. see ipc.h */
-	time_t			sem_ctime;	/* last change time */
+	time_t			sem_ctime;	/* create/last semctl() time */
 	struct list_head	pending_alter;	/* pending operations */
 						/* that alter the array */
 	struct list_head	pending_const;	/* pending complex operations */
diff --git a/include/uapi/linux/sem.h b/include/uapi/linux/sem.h
index dd73b90..67eb903 100644
--- a/include/uapi/linux/sem.h
+++ b/include/uapi/linux/sem.h
@@ -23,7 +23,7 @@
 struct semid_ds {
 	struct ipc_perm	sem_perm;		/* permissions .. see ipc.h */
 	__kernel_time_t	sem_otime;		/* last semop time */
-	__kernel_time_t	sem_ctime;		/* last change time */
+	__kernel_time_t	sem_ctime;		/* create/last semctl() time */
 	struct sem	*sem_base;		/* ptr to first semaphore in array */
 	struct sem_queue *sem_pending;		/* pending operations to be processed */
 	struct sem_queue **sem_pending_last;	/* last pending operation */
-- 
2.9.3


* Re: [PATCH] ipc/sem: Avoid indexing past end of sem_array
  2017-05-14 13:54 ` Manfred Spraul
@ 2017-05-15 17:40   ` Kees Cook
  0 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2017-05-15 17:40 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: Andrew Morton, Davidlohr Bueso, Ingo Molnar, Peter Zijlstra,
	Fabian Frederick, LKML

On Sun, May 14, 2017 at 6:54 AM, Manfred Spraul
<manfred@colorfullife.com> wrote:
> Hi Kees,
>
> On 05/09/2017 12:23 AM, Kees Cook wrote:
>>
>> This changes the struct + trailing data pattern to using a void * so that
>> the end of sem_array is found without possibly indexing past the end which
>> can upset some static analyzers. Mostly, this ends up avoiding a cast
>> between different non-void types, which the future randstruct GCC plugin
>> was warning about.
>
> Two question:
> - Would the attached patch work with the randstruct plugin as well?
>   If we touch the code, then I would propose that we remove sem_base
> entirely.

I'll double check with your series, but I think your change makes
sense regardless (since it makes it very clear that there are
allocated sems after the struct due to the [0] entry).

>
> - ipc/util.h contains
>
>> #define ipc_rcu_to_struct(p)  ((void *)(p+1))
>
> Does this trigger a warning with randstruct as well?
> If we have to touch it, then I would remove it by merging struct
> kern_ipc_perm and struct ipc_rcu.
>
> And, obviously:
> Do you see any issues with the attached patch?

I'll test your series with the randstruct series and see what falls out. :)

Thanks!

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH 1/3] ipc/sem.c: remove sem_base, embed struct sem
  2017-05-15 17:19   ` [PATCH 1/3] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
@ 2017-05-15 20:08     ` Andrew Morton
  2017-05-15 22:16       ` Kees Cook
  2017-05-16  5:46     ` Christoph Hellwig
  1 sibling, 1 reply; 38+ messages in thread
From: Andrew Morton @ 2017-05-15 20:08 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: mtk.manpages, Kees Cook, LKML, 1vier1, Davidlohr Bueso, mingo,
	peterz, fabf

On Mon, 15 May 2017 19:19:10 +0200 Manfred Spraul <manfred@colorfullife.com> wrote:

> sma->sem_base is initialized with
> 	sma->sem_base = (struct sem *) &sma[1];
> 
> The current code has four problems:
> - There is an unnecessary pointer dereference - sem_base is not needed.
> - Alignment for struct sem only works by chance.
> - The current code causes false positive for static code analysis.
> - This is a cast between different non-void types, which the future
>   randstruct GCC plugin warns on.
> 
> And, as bonus, the code size gets smaller:
> 
> Before:
>   0 .text         00003770
> After:
>   0 .text         0000374e


This clashes with Kees's patch, below.  Does it have the same effect?


From: Kees Cook <keescook@chromium.org>
Subject: ipc/sem: avoid indexing past end of sem_array

This changes the struct + trailing data pattern to using a void * so that
the end of sem_array is found without possibly indexing past the end which
can upset some static analyzers.  Mostly, this ends up avoiding a cast
between different non-void types, which the future randstruct GCC plugin
was warning about.

Link: http://lkml.kernel.org/r/20170508222345.GA52073@beast
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 ipc/sem.c |   12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff -puN ipc/sem.c~ipc-sem-avoid-indexing-past-end-of-sem_array ipc/sem.c
--- a/ipc/sem.c~ipc-sem-avoid-indexing-past-end-of-sem_array
+++ a/ipc/sem.c
@@ -475,6 +475,7 @@ static int newary(struct ipc_namespace *
 {
 	int id;
 	int retval;
+	void *sem_alloc;
 	struct sem_array *sma;
 	int size;
 	key_t key = params->key;
@@ -488,11 +489,14 @@ static int newary(struct ipc_namespace *
 		return -ENOSPC;
 
 	size = sizeof(*sma) + nsems * sizeof(struct sem);
-	sma = ipc_rcu_alloc(size);
-	if (!sma)
+	sem_alloc = ipc_rcu_alloc(size);
+	if (!sem_alloc)
 		return -ENOMEM;
 
-	memset(sma, 0, size);
+	memset(sem_alloc, 0, size);
+
+	sma = sem_alloc;
+	sma->sem_base = sem_alloc + sizeof(*sma);
 
 	sma->sem_perm.mode = (semflg & S_IRWXUGO);
 	sma->sem_perm.key = key;
@@ -504,8 +508,6 @@ static int newary(struct ipc_namespace *
 		return retval;
 	}
 
-	sma->sem_base = (struct sem *) &sma[1];
-
 	for (i = 0; i < nsems; i++) {
 		INIT_LIST_HEAD(&sma->sem_base[i].pending_alter);
 		INIT_LIST_HEAD(&sma->sem_base[i].pending_const);
_


* Re: [PATCH 1/3] ipc/sem.c: remove sem_base, embed struct sem
  2017-05-15 20:08     ` Andrew Morton
@ 2017-05-15 22:16       ` Kees Cook
  0 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2017-05-15 22:16 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Manfred Spraul, Michael Kerrisk, LKML, 1vier1, Davidlohr Bueso,
	Ingo Molnar, Peter Zijlstra, Fabian Frederick

On Mon, May 15, 2017 at 1:08 PM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> On Mon, 15 May 2017 19:19:10 +0200 Manfred Spraul <manfred@colorfullife.com> wrote:
>
>> sma->sem_base is initialized with
>>       sma->sem_base = (struct sem *) &sma[1];
>>
>> The current code has four problems:
>> - There is an unnecessary pointer dereference - sem_base is not needed.
>> - Alignment for struct sem only works by chance.
>> - The current code causes false positive for static code analysis.
>> - This is a cast between different non-void types, which the future
>>   randstruct GCC plugin warns on.
>>
>> And, as bonus, the code size gets smaller:
>>
>> Before:
>>   0 .text         00003770
>> After:
>>   0 .text         0000374e
>
>
> This clashes with Kees's patch, below.  Does it have the same effect?

This is a better clean up than what I've got. I haven't had a chance
to verify this is sufficient for randstruct (I think it is), but I'll
check and send any another needed fixes separately.

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH 2/3] ipc: merge ipc_rcu and kern_ipc_perm
  2017-05-15 17:19   ` [PATCH 2/3] ipc: merge ipc_rcu and kern_ipc_perm Manfred Spraul
@ 2017-05-16  0:03     ` Kees Cook
  0 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2017-05-16  0:03 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: Michael Kerrisk, Andrew Morton, LKML, 1vier1, Davidlohr Bueso,
	Ingo Molnar, Peter Zijlstra, Fabian Frederick

On Mon, May 15, 2017 at 10:19 AM, Manfred Spraul
<manfred@colorfullife.com> wrote:
> ipc has two management structures that exist for every id:
> - struct kern_ipc_perm, it contains e.g. the permissions.
> - struct ipc_rcu, it contains the rcu head for rcu handling and
>   the refcount.
>
> The patch merges both structures.
> As a bonus, we may save one cacheline, because both structures are
> cacheline aligned.
> In addition, it reduces the number of casts, instead most codepaths can
> use container_of.
>
> Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
> ---
>  include/linux/ipc.h |  3 +++
>  ipc/msg.c           | 22 ++++++++++++++--------
>  ipc/sem.c           | 35 ++++++++++++++++++++---------------
>  ipc/shm.c           | 21 ++++++++++++++-------
>  ipc/util.c          | 33 +++++++++++++++------------------
>  ipc/util.h          | 18 +++++++-----------
>  6 files changed, 73 insertions(+), 59 deletions(-)
>
> diff --git a/include/linux/ipc.h b/include/linux/ipc.h
> index 71fd92d..5591f055 100644
> --- a/include/linux/ipc.h
> +++ b/include/linux/ipc.h
> @@ -20,6 +20,9 @@ struct kern_ipc_perm {
>         umode_t         mode;
>         unsigned long   seq;
>         void            *security;
> +
> +       struct rcu_head rcu;
> +       atomic_t refcount;
>  } ____cacheline_aligned_in_smp;
>
>  #endif /* _LINUX_IPC_H */
> diff --git a/ipc/msg.c b/ipc/msg.c
> index 104926d..e9785c4 100644
> --- a/ipc/msg.c
> +++ b/ipc/msg.c
> @@ -97,8 +97,8 @@ static inline void msg_rmid(struct ipc_namespace *ns, struct msg_queue *s)
>
>  static void msg_rcu_free(struct rcu_head *head)
>  {
> -       struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
> -       struct msg_queue *msq = ipc_rcu_to_struct(p);
> +       struct kern_ipc_perm *p = container_of(head, struct kern_ipc_perm, rcu);
> +       struct msg_queue *msq = container_of(p, struct msg_queue, q_perm);
>
>         security_msg_queue_free(msq);
>         ipc_rcu_free(head);
> @@ -118,7 +118,13 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
>         key_t key = params->key;
>         int msgflg = params->flg;
>
> -       msq = ipc_rcu_alloc(sizeof(*msq));
> +       if (offsetof(struct msg_queue, q_perm) != 0) {
> +               pr_err("Invalid struct sem_perm, failing msgget().\n");
> +               return -ENOMEM;
> +       }

This is a compile-time check that will result in a runtime warning.
This should be a BUILD_BUG_ON() instead. It means we can't randomize
the first element of these structures, but I'll deal with that in the
randstruct plugin (or send another fix when I dig through it).

It looks like the "as first element" is needed to share the alloc/put
routines. To randomize the position of kern_ipc_perm within each
structure means having different accessor functions, which makes this
less fun. :P

Anyway, regardless, these patches make the sem_base thing go away and
consolidates other stuff which looks good to me.

Thanks!

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH 1/3] ipc/sem.c: remove sem_base, embed struct sem
  2017-05-15 17:19   ` [PATCH 1/3] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
  2017-05-15 20:08     ` Andrew Morton
@ 2017-05-16  5:46     ` Christoph Hellwig
  1 sibling, 0 replies; 38+ messages in thread
From: Christoph Hellwig @ 2017-05-16  5:46 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: mtk.manpages, Andrew Morton, Kees Cook, LKML, 1vier1,
	Davidlohr Bueso, mingo, peterz, fabf

> +	struct sem		sems[0];

I can't contribute much to the code itself, but [0] is an old
GCC extension; for modern code, sems[] does the same thing
and is standard C.


* [PATCH 0/20 V3] Misc cleanups for ipc
  2017-05-08 22:23 [PATCH] ipc/sem: Avoid indexing past end of sem_array Kees Cook
  2017-05-14 13:54 ` Manfred Spraul
  2017-05-15 17:19 ` [PATCH 0/2] Misc cleanups for ipc Manfred Spraul
@ 2017-05-25 18:50 ` Manfred Spraul
  2017-05-25 18:50   ` [PATCH 01/20] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
                     ` (20 more replies)
  2 siblings, 21 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

Hi all,

Updated series. The series got longer, because I merged all patches
from Kees.

Main changes are:
- sems[] instead of sem[0].
- Immediately use BUILD_BUG_ON()
- Immediately move the memset() to avoid crashing with SEM_UNDO.
- Use rcu for every security_xx_free(), even if ipc_addid() was not 
  successful

@Andrew: Could you add them again to your tree?

@Michael: 
Should we update man semctl(2)?
Several years ago, I did a review and found that sem_ctime is the time
of the last change only on Coherent...

http://calculix-rpm.sourceforge.net/sysvsem.html

--
        Manfred


* [PATCH 01/20] ipc/sem.c: remove sem_base, embed struct sem
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 19:43     ` Kees Cook
  2017-05-25 18:50   ` [PATCH 02/20] ipc: merge ipc_rcu and kern_ipc_perm Manfred Spraul
                     ` (19 subsequent siblings)
  20 siblings, 1 reply; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

sma->sem_base is initialized with
	sma->sem_base = (struct sem *) &sma[1];

The current code has four problems:
- There is an unnecessary pointer dereference - sem_base is not needed.
- Alignment for struct sem only works by chance.
- The current code causes false positives for static code analysis.
- This is a cast between different non-void types, which the future
  randstruct GCC plugin warns on.

And, as bonus, the code size gets smaller:

Before:
  0 .text         00003770
After:
  0 .text         0000374e

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 include/linux/sem.h | 22 +++++++++++++-
 ipc/sem.c           | 88 +++++++++++++++++++++--------------------------------
 2 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/include/linux/sem.h b/include/linux/sem.h
index 9edec92..9db1409 100644
--- a/include/linux/sem.h
+++ b/include/linux/sem.h
@@ -8,11 +8,29 @@
 
 struct task_struct;
 
+/* One semaphore structure for each semaphore in the system. */
+struct sem {
+	int	semval;		/* current value */
+	/*
+	 * PID of the process that last modified the semaphore. For
+	 * Linux, specifically these are:
+	 *  - semop
+	 *  - semctl, via SETVAL and SETALL.
+	 *  - at task exit when performing undo adjustments (see exit_sem).
+	 */
+	int	sempid;
+	spinlock_t	lock;	/* spinlock for fine-grained semtimedop */
+	struct list_head pending_alter; /* pending single-sop operations */
+					/* that alter the semaphore */
+	struct list_head pending_const; /* pending single-sop operations */
+					/* that do not alter the semaphore*/
+	time_t	sem_otime;	/* candidate for sem_otime */
+} ____cacheline_aligned_in_smp;
+
 /* One sem_array data structure for each set of semaphores in the system. */
 struct sem_array {
 	struct kern_ipc_perm	sem_perm;	/* permissions .. see ipc.h */
 	time_t			sem_ctime;	/* last change time */
-	struct sem		*sem_base;	/* ptr to first semaphore in array */
 	struct list_head	pending_alter;	/* pending operations */
 						/* that alter the array */
 	struct list_head	pending_const;	/* pending complex operations */
@@ -21,6 +39,8 @@ struct sem_array {
 	int			sem_nsems;	/* no. of semaphores in array */
 	int			complex_count;	/* pending complex operations */
 	unsigned int		use_global_lock;/* >0: global lock required */
+
+	struct sem		sems[];
 };
 
 #ifdef CONFIG_SYSVIPC
diff --git a/ipc/sem.c b/ipc/sem.c
index 947dc23..fff8337 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -87,24 +87,6 @@
 #include <linux/uaccess.h>
 #include "util.h"
 
-/* One semaphore structure for each semaphore in the system. */
-struct sem {
-	int	semval;		/* current value */
-	/*
-	 * PID of the process that last modified the semaphore. For
-	 * Linux, specifically these are:
-	 *  - semop
-	 *  - semctl, via SETVAL and SETALL.
-	 *  - at task exit when performing undo adjustments (see exit_sem).
-	 */
-	int	sempid;
-	spinlock_t	lock;	/* spinlock for fine-grained semtimedop */
-	struct list_head pending_alter; /* pending single-sop operations */
-					/* that alter the semaphore */
-	struct list_head pending_const; /* pending single-sop operations */
-					/* that do not alter the semaphore*/
-	time_t	sem_otime;	/* candidate for sem_otime */
-} ____cacheline_aligned_in_smp;
 
 /* One queue for each sleeping process in the system. */
 struct sem_queue {
@@ -175,7 +157,7 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it);
  *	sem_array.sem_undo
  *
  * b) global or semaphore sem_lock() for read/write:
- *	sem_array.sem_base[i].pending_{const,alter}:
+ *	sem_array.sems[i].pending_{const,alter}:
  *
  * c) special:
  *	sem_undo_list.list_proc:
@@ -250,7 +232,7 @@ static void unmerge_queues(struct sem_array *sma)
 	 */
 	list_for_each_entry_safe(q, tq, &sma->pending_alter, list) {
 		struct sem *curr;
-		curr = &sma->sem_base[q->sops[0].sem_num];
+		curr = &sma->sems[q->sops[0].sem_num];
 
 		list_add_tail(&q->list, &curr->pending_alter);
 	}
@@ -270,7 +252,7 @@ static void merge_queues(struct sem_array *sma)
 {
 	int i;
 	for (i = 0; i < sma->sem_nsems; i++) {
-		struct sem *sem = sma->sem_base + i;
+		struct sem *sem = &sma->sems[i];
 
 		list_splice_init(&sem->pending_alter, &sma->pending_alter);
 	}
@@ -306,7 +288,7 @@ static void complexmode_enter(struct sem_array *sma)
 	sma->use_global_lock = USE_GLOBAL_LOCK_HYSTERESIS;
 
 	for (i = 0; i < sma->sem_nsems; i++) {
-		sem = sma->sem_base + i;
+		sem = &sma->sems[i];
 		spin_lock(&sem->lock);
 		spin_unlock(&sem->lock);
 	}
@@ -366,7 +348,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 	 *
 	 * Both facts are tracked by use_global_mode.
 	 */
-	sem = sma->sem_base + sops->sem_num;
+	sem = &sma->sems[sops->sem_num];
 
 	/*
 	 * Initial check for use_global_lock. Just an optimization,
@@ -421,7 +403,7 @@ static inline void sem_unlock(struct sem_array *sma, int locknum)
 		complexmode_tryleave(sma);
 		ipc_unlock_object(&sma->sem_perm);
 	} else {
-		struct sem *sem = sma->sem_base + locknum;
+		struct sem *sem = &sma->sems[locknum];
 		spin_unlock(&sem->lock);
 	}
 }
@@ -487,7 +469,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	if (ns->used_sems + nsems > ns->sc_semmns)
 		return -ENOSPC;
 
-	size = sizeof(*sma) + nsems * sizeof(struct sem);
+	size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
 	sma = ipc_rcu_alloc(size);
 	if (!sma)
 		return -ENOMEM;
@@ -504,12 +486,10 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 		return retval;
 	}
 
-	sma->sem_base = (struct sem *) &sma[1];
-
 	for (i = 0; i < nsems; i++) {
-		INIT_LIST_HEAD(&sma->sem_base[i].pending_alter);
-		INIT_LIST_HEAD(&sma->sem_base[i].pending_const);
-		spin_lock_init(&sma->sem_base[i].lock);
+		INIT_LIST_HEAD(&sma->sems[i].pending_alter);
+		INIT_LIST_HEAD(&sma->sems[i].pending_const);
+		spin_lock_init(&sma->sems[i].lock);
 	}
 
 	sma->complex_count = 0;
@@ -612,7 +592,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 	un = q->undo;
 
 	for (sop = sops; sop < sops + nsops; sop++) {
-		curr = sma->sem_base + sop->sem_num;
+		curr = &sma->sems[sop->sem_num];
 		sem_op = sop->sem_op;
 		result = curr->semval;
 
@@ -639,7 +619,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 	sop--;
 	pid = q->pid;
 	while (sop >= sops) {
-		sma->sem_base[sop->sem_num].sempid = pid;
+		sma->sems[sop->sem_num].sempid = pid;
 		sop--;
 	}
 
@@ -661,7 +641,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 	sop--;
 	while (sop >= sops) {
 		sem_op = sop->sem_op;
-		sma->sem_base[sop->sem_num].semval -= sem_op;
+		sma->sems[sop->sem_num].semval -= sem_op;
 		if (sop->sem_flg & SEM_UNDO)
 			un->semadj[sop->sem_num] += sem_op;
 		sop--;
@@ -692,7 +672,7 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
 	 * until the operations can go through.
 	 */
 	for (sop = sops; sop < sops + nsops; sop++) {
-		curr = sma->sem_base + sop->sem_num;
+		curr = &sma->sems[sop->sem_num];
 		sem_op = sop->sem_op;
 		result = curr->semval;
 
@@ -716,7 +696,7 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
 	}
 
 	for (sop = sops; sop < sops + nsops; sop++) {
-		curr = sma->sem_base + sop->sem_num;
+		curr = &sma->sems[sop->sem_num];
 		sem_op = sop->sem_op;
 		result = curr->semval;
 
@@ -815,7 +795,7 @@ static int wake_const_ops(struct sem_array *sma, int semnum,
 	if (semnum == -1)
 		pending_list = &sma->pending_const;
 	else
-		pending_list = &sma->sem_base[semnum].pending_const;
+		pending_list = &sma->sems[semnum].pending_const;
 
 	list_for_each_entry_safe(q, tmp, pending_list, list) {
 		int error = perform_atomic_semop(sma, q);
@@ -856,7 +836,7 @@ static int do_smart_wakeup_zero(struct sem_array *sma, struct sembuf *sops,
 		for (i = 0; i < nsops; i++) {
 			int num = sops[i].sem_num;
 
-			if (sma->sem_base[num].semval == 0) {
+			if (sma->sems[num].semval == 0) {
 				got_zero = 1;
 				semop_completed |= wake_const_ops(sma, num, wake_q);
 			}
@@ -867,7 +847,7 @@ static int do_smart_wakeup_zero(struct sem_array *sma, struct sembuf *sops,
 		 * Assume all were changed.
 		 */
 		for (i = 0; i < sma->sem_nsems; i++) {
-			if (sma->sem_base[i].semval == 0) {
+			if (sma->sems[i].semval == 0) {
 				got_zero = 1;
 				semop_completed |= wake_const_ops(sma, i, wake_q);
 			}
@@ -909,7 +889,7 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
 	if (semnum == -1)
 		pending_list = &sma->pending_alter;
 	else
-		pending_list = &sma->sem_base[semnum].pending_alter;
+		pending_list = &sma->sems[semnum].pending_alter;
 
 again:
 	list_for_each_entry_safe(q, tmp, pending_list, list) {
@@ -922,7 +902,7 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
 		 * be in the  per semaphore pending queue, and decrements
 		 * cannot be successful if the value is already 0.
 		 */
-		if (semnum != -1 && sma->sem_base[semnum].semval == 0)
+		if (semnum != -1 && sma->sems[semnum].semval == 0)
 			break;
 
 		error = perform_atomic_semop(sma, q);
@@ -959,9 +939,9 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
 static void set_semotime(struct sem_array *sma, struct sembuf *sops)
 {
 	if (sops == NULL) {
-		sma->sem_base[0].sem_otime = get_seconds();
+		sma->sems[0].sem_otime = get_seconds();
 	} else {
-		sma->sem_base[sops[0].sem_num].sem_otime =
+		sma->sems[sops[0].sem_num].sem_otime =
 							get_seconds();
 	}
 }
@@ -1067,9 +1047,9 @@ static int count_semcnt(struct sem_array *sma, ushort semnum,
 	semcnt = 0;
 	/* First: check the simple operations. They are easy to evaluate */
 	if (count_zero)
-		l = &sma->sem_base[semnum].pending_const;
+		l = &sma->sems[semnum].pending_const;
 	else
-		l = &sma->sem_base[semnum].pending_alter;
+		l = &sma->sems[semnum].pending_alter;
 
 	list_for_each_entry(q, l, list) {
 		/* all task on a per-semaphore list sleep on exactly
@@ -1124,7 +1104,7 @@ static void freeary(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
 		wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
 	}
 	for (i = 0; i < sma->sem_nsems; i++) {
-		struct sem *sem = sma->sem_base + i;
+		struct sem *sem = &sma->sems[i];
 		list_for_each_entry_safe(q, tq, &sem->pending_const, list) {
 			unlink_queue(sma, q);
 			wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
@@ -1174,9 +1154,9 @@ static time_t get_semotime(struct sem_array *sma)
 	int i;
 	time_t res;
 
-	res = sma->sem_base[0].sem_otime;
+	res = sma->sems[0].sem_otime;
 	for (i = 1; i < sma->sem_nsems; i++) {
-		time_t to = sma->sem_base[i].sem_otime;
+		time_t to = sma->sems[i].sem_otime;
 
 		if (to > res)
 			res = to;
@@ -1325,7 +1305,7 @@ static int semctl_setval(struct ipc_namespace *ns, int semid, int semnum,
 		return -EIDRM;
 	}
 
-	curr = &sma->sem_base[semnum];
+	curr = &sma->sems[semnum];
 
 	ipc_assert_locked_object(&sma->sem_perm);
 	list_for_each_entry(un, &sma->list_id, list_id)
@@ -1402,7 +1382,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 			}
 		}
 		for (i = 0; i < sma->sem_nsems; i++)
-			sem_io[i] = sma->sem_base[i].semval;
+			sem_io[i] = sma->sems[i].semval;
 		sem_unlock(sma, -1);
 		rcu_read_unlock();
 		err = 0;
@@ -1450,8 +1430,8 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		}
 
 		for (i = 0; i < nsems; i++) {
-			sma->sem_base[i].semval = sem_io[i];
-			sma->sem_base[i].sempid = task_tgid_vnr(current);
+			sma->sems[i].semval = sem_io[i];
+			sma->sems[i].sempid = task_tgid_vnr(current);
 		}
 
 		ipc_assert_locked_object(&sma->sem_perm);
@@ -1476,7 +1456,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		err = -EIDRM;
 		goto out_unlock;
 	}
-	curr = &sma->sem_base[semnum];
+	curr = &sma->sems[semnum];
 
 	switch (cmd) {
 	case GETVAL:
@@ -1932,7 +1912,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	 */
 	if (nsops == 1) {
 		struct sem *curr;
-		curr = &sma->sem_base[sops->sem_num];
+		curr = &sma->sems[sops->sem_num];
 
 		if (alter) {
 			if (sma->complex_count) {
@@ -2146,7 +2126,7 @@ void exit_sem(struct task_struct *tsk)
 
 		/* perform adjustments registered in un */
 		for (i = 0; i < sma->sem_nsems; i++) {
-			struct sem *semaphore = &sma->sem_base[i];
+			struct sem *semaphore = &sma->sems[i];
 			if (un->semadj[i]) {
 				semaphore->semval += un->semadj[i];
 				/*
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread
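The patch above replaces the `sem_base` pointer with a C99 flexible array member. For readers unfamiliar with the idiom, here is a minimal userspace sketch (the struct fields are simplified stand-ins, not the kernel definitions): one allocation covers the header plus the trailing per-semaphore array, and `sma->sems[i]` indexes it without the old `(struct sem *)&sma[1]` cast.

```c
#include <stdlib.h>

/* Simplified stand-ins for the kernel structures (illustration only). */
struct sem {
	int semval;		/* current value */
	int sempid;		/* pid of last modifier */
};

struct sem_array {
	int sem_nsems;		/* number of semaphores in the set */
	struct sem sems[];	/* flexible array member, as in the patch */
};

/* Allocate the header plus nsems trailing struct sem in one zeroed
 * block, mirroring what newary() does via ipc_rcu_alloc(). */
static struct sem_array *alloc_sem_array(int nsems)
{
	struct sem_array *sma;

	sma = calloc(1, sizeof(*sma) + nsems * sizeof(sma->sems[0]));
	if (!sma)
		return NULL;
	sma->sem_nsems = nsems;
	return sma;
}
```

Since the array is part of the struct, no separate pointer needs to be maintained or cast, which is exactly what lets the kernel patch drop `sem_base`.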

* [PATCH 02/20] ipc: merge ipc_rcu and kern_ipc_perm
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
  2017-05-25 18:50   ` [PATCH 01/20] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 19:34     ` Kees Cook
  2017-05-25 18:50   ` [PATCH 03/20] include/linux/sem.h: Correctly document sem_ctime Manfred Spraul
                     ` (18 subsequent siblings)
  20 siblings, 1 reply; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

ipc has two management structures that exist for every id:
- struct kern_ipc_perm, which contains e.g. the permissions.
- struct ipc_rcu, which contains the rcu head for rcu handling and
  the refcount.

The patch merges both structures.
As a bonus, we may save one cacheline, because both structures are
cacheline aligned.
In addition, it reduces the number of casts: most codepaths can use
container_of() instead.

To simplify the code, ipc_rcu_alloc() now initializes the allocation to 0.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 include/linux/ipc.h |  3 +++
 ipc/msg.c           | 19 +++++++++++--------
 ipc/sem.c           | 34 +++++++++++++++++-----------------
 ipc/shm.c           | 18 +++++++++++-------
 ipc/util.c          | 33 +++++++++++++++------------------
 ipc/util.h          | 18 +++++++-----------
 6 files changed, 64 insertions(+), 61 deletions(-)

diff --git a/include/linux/ipc.h b/include/linux/ipc.h
index 71fd92d..5591f055 100644
--- a/include/linux/ipc.h
+++ b/include/linux/ipc.h
@@ -20,6 +20,9 @@ struct kern_ipc_perm {
 	umode_t		mode;
 	unsigned long	seq;
 	void		*security;
+
+	struct rcu_head rcu;
+	atomic_t refcount;
 } ____cacheline_aligned_in_smp;
 
 #endif /* _LINUX_IPC_H */
diff --git a/ipc/msg.c b/ipc/msg.c
index 104926d..0ed7dae 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -97,8 +97,8 @@ static inline void msg_rmid(struct ipc_namespace *ns, struct msg_queue *s)
 
 static void msg_rcu_free(struct rcu_head *head)
 {
-	struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
-	struct msg_queue *msq = ipc_rcu_to_struct(p);
+	struct kern_ipc_perm *p = container_of(head, struct kern_ipc_perm, rcu);
+	struct msg_queue *msq = container_of(p, struct msg_queue, q_perm);
 
 	security_msg_queue_free(msq);
 	ipc_rcu_free(head);
@@ -118,7 +118,10 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	key_t key = params->key;
 	int msgflg = params->flg;
 
-	msq = ipc_rcu_alloc(sizeof(*msq));
+	BUILD_BUG_ON(offsetof(struct msg_queue, q_perm) != 0);
+
+	msq = container_of(ipc_rcu_alloc(sizeof(*msq)), struct msg_queue,
+				q_perm);
 	if (!msq)
 		return -ENOMEM;
 
@@ -128,7 +131,7 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	msq->q_perm.security = NULL;
 	retval = security_msg_queue_alloc(msq);
 	if (retval) {
-		ipc_rcu_putref(msq, ipc_rcu_free);
+		ipc_rcu_putref(&msq->q_perm, ipc_rcu_free);
 		return retval;
 	}
 
@@ -144,7 +147,7 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	/* ipc_addid() locks msq upon success. */
 	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
 	if (id < 0) {
-		ipc_rcu_putref(msq, msg_rcu_free);
+		ipc_rcu_putref(&msq->q_perm, msg_rcu_free);
 		return id;
 	}
 
@@ -249,7 +252,7 @@ static void freeque(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
 		free_msg(msg);
 	}
 	atomic_sub(msq->q_cbytes, &ns->msg_bytes);
-	ipc_rcu_putref(msq, msg_rcu_free);
+	ipc_rcu_putref(&msq->q_perm, msg_rcu_free);
 }
 
 /*
@@ -688,7 +691,7 @@ long do_msgsnd(int msqid, long mtype, void __user *mtext,
 		/* enqueue the sender and prepare to block */
 		ss_add(msq, &s, msgsz);
 
-		if (!ipc_rcu_getref(msq)) {
+		if (!ipc_rcu_getref(&msq->q_perm)) {
 			err = -EIDRM;
 			goto out_unlock0;
 		}
@@ -700,7 +703,7 @@ long do_msgsnd(int msqid, long mtype, void __user *mtext,
 		rcu_read_lock();
 		ipc_lock_object(&msq->q_perm);
 
-		ipc_rcu_putref(msq, msg_rcu_free);
+		ipc_rcu_putref(&msq->q_perm, msg_rcu_free);
 		/* raced with RMID? */
 		if (!ipc_valid_object(&msq->q_perm)) {
 			err = -EIDRM;
diff --git a/ipc/sem.c b/ipc/sem.c
index fff8337..bdff6d9 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -260,8 +260,8 @@ static void merge_queues(struct sem_array *sma)
 
 static void sem_rcu_free(struct rcu_head *head)
 {
-	struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
-	struct sem_array *sma = ipc_rcu_to_struct(p);
+	struct kern_ipc_perm *p = container_of(head, struct kern_ipc_perm, rcu);
+	struct sem_array *sma = container_of(p, struct sem_array, sem_perm);
 
 	security_sem_free(sma);
 	ipc_rcu_free(head);
@@ -438,7 +438,7 @@ static inline struct sem_array *sem_obtain_object_check(struct ipc_namespace *ns
 static inline void sem_lock_and_putref(struct sem_array *sma)
 {
 	sem_lock(sma, NULL, -1);
-	ipc_rcu_putref(sma, sem_rcu_free);
+	ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 }
 
 static inline void sem_rmid(struct ipc_namespace *ns, struct sem_array *s)
@@ -469,20 +469,20 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	if (ns->used_sems + nsems > ns->sc_semmns)
 		return -ENOSPC;
 
+	BUILD_BUG_ON(offsetof(struct sem_array, sem_perm) != 0);
+
 	size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
-	sma = ipc_rcu_alloc(size);
+	sma = container_of(ipc_rcu_alloc(size), struct sem_array, sem_perm);
 	if (!sma)
 		return -ENOMEM;
 
-	memset(sma, 0, size);
-
 	sma->sem_perm.mode = (semflg & S_IRWXUGO);
 	sma->sem_perm.key = key;
 
 	sma->sem_perm.security = NULL;
 	retval = security_sem_alloc(sma);
 	if (retval) {
-		ipc_rcu_putref(sma, ipc_rcu_free);
+		ipc_rcu_putref(&sma->sem_perm, ipc_rcu_free);
 		return retval;
 	}
 
@@ -502,7 +502,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 
 	id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
 	if (id < 0) {
-		ipc_rcu_putref(sma, sem_rcu_free);
+		ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 		return id;
 	}
 	ns->used_sems += nsems;
@@ -1122,7 +1122,7 @@ static void freeary(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
 
 	wake_up_q(&wake_q);
 	ns->used_sems -= sma->sem_nsems;
-	ipc_rcu_putref(sma, sem_rcu_free);
+	ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 }
 
 static unsigned long copy_semid_to_user(void __user *buf, struct semid64_ds *in, int version)
@@ -1362,7 +1362,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 			goto out_unlock;
 		}
 		if (nsems > SEMMSL_FAST) {
-			if (!ipc_rcu_getref(sma)) {
+			if (!ipc_rcu_getref(&sma->sem_perm)) {
 				err = -EIDRM;
 				goto out_unlock;
 			}
@@ -1370,7 +1370,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 			rcu_read_unlock();
 			sem_io = ipc_alloc(sizeof(ushort)*nsems);
 			if (sem_io == NULL) {
-				ipc_rcu_putref(sma, sem_rcu_free);
+				ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 				return -ENOMEM;
 			}
 
@@ -1395,7 +1395,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		int i;
 		struct sem_undo *un;
 
-		if (!ipc_rcu_getref(sma)) {
+		if (!ipc_rcu_getref(&sma->sem_perm)) {
 			err = -EIDRM;
 			goto out_rcu_wakeup;
 		}
@@ -1404,20 +1404,20 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		if (nsems > SEMMSL_FAST) {
 			sem_io = ipc_alloc(sizeof(ushort)*nsems);
 			if (sem_io == NULL) {
-				ipc_rcu_putref(sma, sem_rcu_free);
+				ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 				return -ENOMEM;
 			}
 		}
 
 		if (copy_from_user(sem_io, p, nsems*sizeof(ushort))) {
-			ipc_rcu_putref(sma, sem_rcu_free);
+			ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 			err = -EFAULT;
 			goto out_free;
 		}
 
 		for (i = 0; i < nsems; i++) {
 			if (sem_io[i] > SEMVMX) {
-				ipc_rcu_putref(sma, sem_rcu_free);
+				ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 				err = -ERANGE;
 				goto out_free;
 			}
@@ -1699,7 +1699,7 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
 	}
 
 	nsems = sma->sem_nsems;
-	if (!ipc_rcu_getref(sma)) {
+	if (!ipc_rcu_getref(&sma->sem_perm)) {
 		rcu_read_unlock();
 		un = ERR_PTR(-EIDRM);
 		goto out;
@@ -1709,7 +1709,7 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
 	/* step 2: allocate new undo structure */
 	new = kzalloc(sizeof(struct sem_undo) + sizeof(short)*nsems, GFP_KERNEL);
 	if (!new) {
-		ipc_rcu_putref(sma, sem_rcu_free);
+		ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 		return ERR_PTR(-ENOMEM);
 	}
 
diff --git a/ipc/shm.c b/ipc/shm.c
index 34c4344..2eb85bd 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -174,9 +174,10 @@ static inline void shm_lock_by_ptr(struct shmid_kernel *ipcp)
 
 static void shm_rcu_free(struct rcu_head *head)
 {
-	struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
-	struct shmid_kernel *shp = ipc_rcu_to_struct(p);
-
+	struct kern_ipc_perm *ptr = container_of(head, struct kern_ipc_perm,
+							rcu);
+	struct shmid_kernel *shp = container_of(ptr, struct shmid_kernel,
+							shm_perm);
 	security_shm_free(shp);
 	ipc_rcu_free(head);
 }
@@ -241,7 +242,7 @@ static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp)
 		user_shm_unlock(i_size_read(file_inode(shm_file)),
 				shp->mlock_user);
 	fput(shm_file);
-	ipc_rcu_putref(shp, shm_rcu_free);
+	ipc_rcu_putref(&shp->shm_perm, shm_rcu_free);
 }
 
 /*
@@ -542,7 +543,10 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 			ns->shm_tot + numpages > ns->shm_ctlall)
 		return -ENOSPC;
 
-	shp = ipc_rcu_alloc(sizeof(*shp));
+	BUILD_BUG_ON(offsetof(struct shmid_kernel, shm_perm) != 0);
+
+	shp = container_of(ipc_rcu_alloc(sizeof(*shp)), struct shmid_kernel,
+				shm_perm);
 	if (!shp)
 		return -ENOMEM;
 
@@ -553,7 +557,7 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 	shp->shm_perm.security = NULL;
 	error = security_shm_alloc(shp);
 	if (error) {
-		ipc_rcu_putref(shp, ipc_rcu_free);
+		ipc_rcu_putref(&shp->shm_perm, ipc_rcu_free);
 		return error;
 	}
 
@@ -624,7 +628,7 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 		user_shm_unlock(size, shp->mlock_user);
 	fput(file);
 no_file:
-	ipc_rcu_putref(shp, shm_rcu_free);
+	ipc_rcu_putref(&shp->shm_perm, shm_rcu_free);
 	return error;
 }
 
diff --git a/ipc/util.c b/ipc/util.c
index caec7b1..9dcc08b 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -418,46 +418,43 @@ void ipc_free(void *ptr)
 }
 
 /**
- * ipc_rcu_alloc - allocate ipc and rcu space
+ * ipc_rcu_alloc - allocate ipc space
  * @size: size desired
  *
- * Allocate memory for the rcu header structure +  the object.
- * Returns the pointer to the object or NULL upon failure.
+ * Allocate memory for an ipc object.
+ * The first member must be struct kern_ipc_perm.
  */
-void *ipc_rcu_alloc(int size)
+struct kern_ipc_perm *ipc_rcu_alloc(int size)
 {
 	/*
 	 * We prepend the allocation with the rcu struct
 	 */
-	struct ipc_rcu *out = ipc_alloc(sizeof(struct ipc_rcu) + size);
+	struct kern_ipc_perm *out = ipc_alloc(size);
 	if (unlikely(!out))
 		return NULL;
 	atomic_set(&out->refcount, 1);
-	return out + 1;
+	return out;
 }
 
-int ipc_rcu_getref(void *ptr)
+int ipc_rcu_getref(struct kern_ipc_perm *ptr)
 {
-	struct ipc_rcu *p = ((struct ipc_rcu *)ptr) - 1;
-
-	return atomic_inc_not_zero(&p->refcount);
+	return atomic_inc_not_zero(&ptr->refcount);
 }
 
-void ipc_rcu_putref(void *ptr, void (*func)(struct rcu_head *head))
+void ipc_rcu_putref(struct kern_ipc_perm *ptr,
+			void (*func)(struct rcu_head *head))
 {
-	struct ipc_rcu *p = ((struct ipc_rcu *)ptr) - 1;
-
-	if (!atomic_dec_and_test(&p->refcount))
+	if (!atomic_dec_and_test(&ptr->refcount))
 		return;
 
-	call_rcu(&p->rcu, func);
+	call_rcu(&ptr->rcu, func);
 }
 
-void ipc_rcu_free(struct rcu_head *head)
+void ipc_rcu_free(struct rcu_head *h)
 {
-	struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
+	struct kern_ipc_perm *ptr = container_of(h, struct kern_ipc_perm, rcu);
 
-	kvfree(p);
+	kvfree(ptr);
 }
 
 /**
diff --git a/ipc/util.h b/ipc/util.h
index 60ddccc..09d0f91 100644
--- a/ipc/util.h
+++ b/ipc/util.h
@@ -47,13 +47,6 @@ static inline void msg_exit_ns(struct ipc_namespace *ns) { }
 static inline void shm_exit_ns(struct ipc_namespace *ns) { }
 #endif
 
-struct ipc_rcu {
-	struct rcu_head rcu;
-	atomic_t refcount;
-} ____cacheline_aligned_in_smp;
-
-#define ipc_rcu_to_struct(p)  ((void *)(p+1))
-
 /*
  * Structure that holds the parameters needed by the ipc operations
  * (see after)
@@ -125,11 +118,14 @@ void ipc_free(void *ptr);
  * Objects are reference counted, they start with reference count 1.
  * getref increases the refcount, the putref call that reduces the recount
  * to 0 schedules the rcu destruction. Caller must guarantee locking.
+ *
+ * struct kern_ipc_perm must be the first member in the allocated structure.
  */
-void *ipc_rcu_alloc(int size);
-int ipc_rcu_getref(void *ptr);
-void ipc_rcu_putref(void *ptr, void (*func)(struct rcu_head *head));
-void ipc_rcu_free(struct rcu_head *head);
+struct kern_ipc_perm *ipc_rcu_alloc(int size);
+int ipc_rcu_getref(struct kern_ipc_perm *ptr);
+void ipc_rcu_putref(struct kern_ipc_perm *ptr,
+			void (*func)(struct rcu_head *head));
+void ipc_rcu_free(struct rcu_head *h);
 
 struct kern_ipc_perm *ipc_lock(struct ipc_ids *, int);
 struct kern_ipc_perm *ipc_obtain_object_idr(struct ipc_ids *ids, int id);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread
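The container_of() pattern that replaces the `ipc_rcu_to_struct()` casts above can be sketched in plain C. The BUILD_BUG_ON matters because ipc_rcu_alloc() allocates the full object but returns it as `struct kern_ipc_perm *`; that pointer is only also the start of the allocation if the perm member sits at offset 0. (The structs below are simplified stand-ins for illustration.)

```c
#include <stddef.h>

/* Userspace copy of the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Simplified stand-ins, not the kernel definitions. */
struct kern_ipc_perm {
	int key;
};

struct sem_array {
	struct kern_ipc_perm sem_perm;	/* must be the first member */
	int sem_nsems;
};

/* Equivalent of the first lines of sem_rcu_free(): recover the
 * enclosing sem_array from a pointer to its embedded perm member. */
static struct sem_array *to_sem_array(struct kern_ipc_perm *p)
{
	return container_of(p, struct sem_array, sem_perm);
}
```

container_of() would work for any member offset; the offset-0 requirement comes solely from ipc_rcu_alloc() handing back the allocation as a `struct kern_ipc_perm *`.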

* [PATCH 03/20] include/linux/sem.h: Correctly document sem_ctime
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
  2017-05-25 18:50   ` [PATCH 01/20] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
  2017-05-25 18:50   ` [PATCH 02/20] ipc: merge ipc_rcu and kern_ipc_perm Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 18:50   ` [PATCH 04/20] ipc: Drop non-RCU allocation Manfred Spraul
                     ` (17 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf,
	Manfred Spraul, linux-man

sem_ctime is initialized to the semget() time and then updated at every
semctl() that changes the array.
Thus it does not represent the time of the last change.

In particular, semop() calls are recorded only in sem_otime, not in
sem_ctime.

This is already described in ipc/sem.c; I just overlooked that there are
comments in include/linux/sem.h and man semctl(2) as well.

So: correct the wrong comments.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: linux-man@vger.kernel.org
---
 include/linux/sem.h      | 2 +-
 include/uapi/linux/sem.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/sem.h b/include/linux/sem.h
index 9db1409..be5cf2e 100644
--- a/include/linux/sem.h
+++ b/include/linux/sem.h
@@ -30,7 +30,7 @@ struct sem {
 /* One sem_array data structure for each set of semaphores in the system. */
 struct sem_array {
 	struct kern_ipc_perm	sem_perm;	/* permissions .. see ipc.h */
-	time_t			sem_ctime;	/* last change time */
+	time_t			sem_ctime;	/* create/last semctl() time */
 	struct list_head	pending_alter;	/* pending operations */
 						/* that alter the array */
 	struct list_head	pending_const;	/* pending complex operations */
diff --git a/include/uapi/linux/sem.h b/include/uapi/linux/sem.h
index dd73b90..67eb903 100644
--- a/include/uapi/linux/sem.h
+++ b/include/uapi/linux/sem.h
@@ -23,7 +23,7 @@
 struct semid_ds {
 	struct ipc_perm	sem_perm;		/* permissions .. see ipc.h */
 	__kernel_time_t	sem_otime;		/* last semop time */
-	__kernel_time_t	sem_ctime;		/* last change time */
+	__kernel_time_t	sem_ctime;		/* create/last semctl() time */
 	struct sem	*sem_base;		/* ptr to first semaphore in array */
 	struct sem_queue *sem_pending;		/* pending operations to be processed */
 	struct sem_queue **sem_pending_last;	/* last pending operation */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread
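The distinction the patch documents is observable from userspace. A small sketch, assuming a Linux host where SysV IPC is available: after a semop(), sem_otime moves while sem_ctime keeps its semget()-time value.

```c
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <time.h>

/* glibc requires callers to define union semun themselves. */
union semun {
	int val;
	struct semid_ds *buf;
	unsigned short *array;
};

/* Create a one-semaphore set, perform a semop(), and report the
 * timestamps: sem_otime is set by semop(), sem_ctime is not. */
static int check_times(time_t *otime, time_t *ctime_before,
		       time_t *ctime_after)
{
	struct semid_ds ds;
	union semun arg = { .buf = &ds };
	struct sembuf op = { .sem_num = 0, .sem_op = 1, .sem_flg = 0 };
	int id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);

	if (id < 0)
		return -1;
	semctl(id, 0, IPC_STAT, arg);
	*ctime_before = ds.sem_ctime;	/* set at semget() time */
	semop(id, &op, 1);
	semctl(id, 0, IPC_STAT, arg);
	*otime = ds.sem_otime;		/* updated by the semop() */
	*ctime_after = ds.sem_ctime;	/* untouched by semop() */
	semctl(id, 0, IPC_RMID, arg);
	return 0;
}
```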

* [PATCH 04/20] ipc: Drop non-RCU allocation
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (2 preceding siblings ...)
  2017-05-25 18:50   ` [PATCH 03/20] include/linux/sem.h: Correctly document sem_ctime Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 19:35     ` Kees Cook
  2017-05-25 18:50   ` [PATCH 05/20] ipc/sem: Do not use ipc_rcu_free() Manfred Spraul
                     ` (16 subsequent siblings)
  20 siblings, 1 reply; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

The only users of ipc_alloc() were ipc_rcu_alloc() and the on-heap
sem_io fall-back buffer. Better to just open-code these allocations to
make things easier to read.

Signed-off-by: Kees Cook <keescook@chromium.org>

[manfred@colorfullife.com: Rediff due to inclusion of memset() into
ipc_rcu_alloc().]

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/sem.c  |  8 +++++---
 ipc/util.c | 27 +++------------------------
 ipc/util.h |  6 ------
 3 files changed, 8 insertions(+), 33 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index bdff6d9..484ccf8 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -1368,7 +1368,8 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 			}
 			sem_unlock(sma, -1);
 			rcu_read_unlock();
-			sem_io = ipc_alloc(sizeof(ushort)*nsems);
+			sem_io = kvmalloc_array(nsems, sizeof(ushort),
+						GFP_KERNEL);
 			if (sem_io == NULL) {
 				ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 				return -ENOMEM;
@@ -1402,7 +1403,8 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		rcu_read_unlock();
 
 		if (nsems > SEMMSL_FAST) {
-			sem_io = ipc_alloc(sizeof(ushort)*nsems);
+			sem_io = kvmalloc_array(nsems, sizeof(ushort),
+						GFP_KERNEL);
 			if (sem_io == NULL) {
 				ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 				return -ENOMEM;
@@ -1480,7 +1482,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 	wake_up_q(&wake_q);
 out_free:
 	if (sem_io != fast_sem_io)
-		ipc_free(sem_io);
+		kvfree(sem_io);
 	return err;
 }
 
diff --git a/ipc/util.c b/ipc/util.c
index 9dcc08b..dd73feb 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -395,29 +395,6 @@ void ipc_rmid(struct ipc_ids *ids, struct kern_ipc_perm *ipcp)
 }
 
 /**
- * ipc_alloc -	allocate ipc space
- * @size: size desired
- *
- * Allocate memory from the appropriate pools and return a pointer to it.
- * NULL is returned if the allocation fails
- */
-void *ipc_alloc(int size)
-{
-	return kvmalloc(size, GFP_KERNEL);
-}
-
-/**
- * ipc_free - free ipc space
- * @ptr: pointer returned by ipc_alloc
- *
- * Free a block created with ipc_alloc().
- */
-void ipc_free(void *ptr)
-{
-	kvfree(ptr);
-}
-
-/**
  * ipc_rcu_alloc - allocate ipc space
  * @size: size desired
  *
@@ -429,9 +406,11 @@ struct kern_ipc_perm *ipc_rcu_alloc(int size)
 	/*
 	 * We prepend the allocation with the rcu struct
 	 */
-	struct kern_ipc_perm *out = ipc_alloc(size);
+	struct kern_ipc_perm *out = kvmalloc(size, GFP_KERNEL);
 	if (unlikely(!out))
 		return NULL;
+
+	memset(out, 0, size);
 	atomic_set(&out->refcount, 1);
 	return out;
 }
diff --git a/ipc/util.h b/ipc/util.h
index 09d0f91..2578fd9 100644
--- a/ipc/util.h
+++ b/ipc/util.h
@@ -107,12 +107,6 @@ void ipc_rmid(struct ipc_ids *, struct kern_ipc_perm *);
 /* must be called with ipcp locked */
 int ipcperms(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp, short flg);
 
-/* for rare, potentially huge allocations.
- * both function can sleep
- */
-void *ipc_alloc(int size);
-void ipc_free(void *ptr);
-
 /*
  * For allocation that need to be freed by RCU.
  * Objects are reference counted, they start with reference count 1.
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread
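One side benefit of switching the sem_io fallback from `ipc_alloc(sizeof(ushort)*nsems)` to kvmalloc_array() is that the element-count multiplication is checked for overflow. A userspace analogue of that guard (alloc_array() is a hypothetical name for illustration, not a kernel API):

```c
#include <stdint.h>
#include <stdlib.h>

/* Overflow-checked array allocation, in the spirit of
 * kvmalloc_array(): fail instead of silently under-allocating
 * when n * size would wrap around. */
static void *alloc_array(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;
	return malloc(n * size);
}
```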

* [PATCH 05/20] ipc/sem: Do not use ipc_rcu_free()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (3 preceding siblings ...)
  2017-05-25 18:50   ` [PATCH 04/20] ipc: Drop non-RCU allocation Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 18:50   ` [PATCH 06/20] ipc/shm: " Manfred Spraul
                     ` (15 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

Avoid using ipc_rcu_free(), since it just re-finds the original structure
pointer. For the pre-list-init failure path, no RCU is needed, since the
object was just allocated; it can be freed directly.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/sem.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 484ccf8..a04c4d6 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -258,13 +258,18 @@ static void merge_queues(struct sem_array *sma)
 	}
 }
 
+static void __sem_free(struct sem_array *sma)
+{
+	kvfree(sma);
+}
+
 static void sem_rcu_free(struct rcu_head *head)
 {
 	struct kern_ipc_perm *p = container_of(head, struct kern_ipc_perm, rcu);
 	struct sem_array *sma = container_of(p, struct sem_array, sem_perm);
 
 	security_sem_free(sma);
-	ipc_rcu_free(head);
+	__sem_free(sma);
 }
 
 /*
@@ -482,7 +487,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	sma->sem_perm.security = NULL;
 	retval = security_sem_alloc(sma);
 	if (retval) {
-		ipc_rcu_putref(&sma->sem_perm, ipc_rcu_free);
+		__sem_free(sma);
 		return retval;
 	}
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread
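The reasoning in patches 05-07, that an object not yet published to other tasks needs no RCU grace period, can be illustrated with a small refcount sketch (stand-in types; in the kernel the final free is deferred via call_rcu() once the object is visible to lockless readers):

```c
#include <stdatomic.h>
#include <stdlib.h>

struct obj {
	atomic_int refcount;
};

static struct obj *obj_alloc(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	if (o)
		atomic_store(&o->refcount, 1);	/* as ipc_rcu_alloc() does */
	return o;
}

/* Drop a reference; whoever drops the last one frees the object.
 * Before the object is published there can be no concurrent readers,
 * which is why the failure paths in these patches may free directly
 * instead of going through call_rcu(). */
static int obj_putref(struct obj *o)
{
	if (atomic_fetch_sub(&o->refcount, 1) != 1)
		return 0;	/* other references remain */
	free(o);
	return 1;		/* this call freed the object */
}
```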

* [PATCH 06/20] ipc/shm: Do not use ipc_rcu_free()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (4 preceding siblings ...)
  2017-05-25 18:50   ` [PATCH 05/20] ipc/sem: Do not use ipc_rcu_free() Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 18:50   ` [PATCH 07/20] ipc/msg: " Manfred Spraul
                     ` (14 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

Avoid using ipc_rcu_free(), since it just re-finds the original structure
pointer. For the pre-list-init failure path, no RCU is needed, since the
object was just allocated; it can be freed directly.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/shm.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/ipc/shm.c b/ipc/shm.c
index 2eb85bd..77e1bff 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -172,6 +172,11 @@ static inline void shm_lock_by_ptr(struct shmid_kernel *ipcp)
 	ipc_lock_object(&ipcp->shm_perm);
 }
 
+static void __shm_free(struct shmid_kernel *shp)
+{
+	kvfree(shp);
+}
+
 static void shm_rcu_free(struct rcu_head *head)
 {
 	struct kern_ipc_perm *ptr = container_of(head, struct kern_ipc_perm,
@@ -179,7 +184,7 @@ static void shm_rcu_free(struct rcu_head *head)
 	struct shmid_kernel *shp = container_of(ptr, struct shmid_kernel,
 							shm_perm);
 	security_shm_free(shp);
-	ipc_rcu_free(head);
+	__shm_free(shp);
 }
 
 static inline void shm_rmid(struct ipc_namespace *ns, struct shmid_kernel *s)
@@ -557,7 +562,7 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 	shp->shm_perm.security = NULL;
 	error = security_shm_alloc(shp);
 	if (error) {
-		ipc_rcu_putref(&shp->shm_perm, ipc_rcu_free);
+		__shm_free(shp);
 		return error;
 	}
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 07/20] ipc/msg: Do not use ipc_rcu_free()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (5 preceding siblings ...)
  2017-05-25 18:50   ` [PATCH 06/20] ipc/shm: " Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 18:50   ` [PATCH 08/20] ipc/util: Drop ipc_rcu_free() Manfred Spraul
                     ` (13 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

Avoid using ipc_rcu_free, since it just re-finds the original structure
pointer. For the pre-list-init failure path, there is no RCU needed,
since it was just allocated. It can be directly freed.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/msg.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/ipc/msg.c b/ipc/msg.c
index 0ed7dae..25d43e2 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -95,13 +95,18 @@ static inline void msg_rmid(struct ipc_namespace *ns, struct msg_queue *s)
 	ipc_rmid(&msg_ids(ns), &s->q_perm);
 }
 
+static void __msg_free(struct msg_queue *msq)
+{
+	kvfree(msq);
+}
+
 static void msg_rcu_free(struct rcu_head *head)
 {
 	struct kern_ipc_perm *p = container_of(head, struct kern_ipc_perm, rcu);
 	struct msg_queue *msq = container_of(p, struct msg_queue, q_perm);
 
 	security_msg_queue_free(msq);
-	ipc_rcu_free(head);
+	__msg_free(msq);
 }
 
 /**
@@ -131,7 +136,7 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	msq->q_perm.security = NULL;
 	retval = security_msg_queue_alloc(msq);
 	if (retval) {
-		ipc_rcu_putref(&msq->q_perm, ipc_rcu_free);
+		__msg_free(msq);
 		return retval;
 	}
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 08/20] ipc/util: Drop ipc_rcu_free()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (6 preceding siblings ...)
  2017-05-25 18:50   ` [PATCH 07/20] ipc/msg: " Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 18:50   ` [PATCH 09/20] ipc/sem: Avoid ipc_rcu_alloc() Manfred Spraul
                     ` (12 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

There are no more callers of ipc_rcu_free(), so remove it.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/util.c | 7 -------
 ipc/util.h | 1 -
 2 files changed, 8 deletions(-)

diff --git a/ipc/util.c b/ipc/util.c
index dd73feb..556884b 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -429,13 +429,6 @@ void ipc_rcu_putref(struct kern_ipc_perm *ptr,
 	call_rcu(&ptr->rcu, func);
 }
 
-void ipc_rcu_free(struct rcu_head *h)
-{
-	struct kern_ipc_perm *ptr = container_of(h, struct kern_ipc_perm, rcu);
-
-	kvfree(ptr);
-}
-
 /**
  * ipcperms - check ipc permissions
  * @ns: ipc namespace
diff --git a/ipc/util.h b/ipc/util.h
index 2578fd9..44efbc0 100644
--- a/ipc/util.h
+++ b/ipc/util.h
@@ -119,7 +119,6 @@ struct kern_ipc_perm *ipc_rcu_alloc(int size);
 int ipc_rcu_getref(struct kern_ipc_perm *ptr);
 void ipc_rcu_putref(struct kern_ipc_perm *ptr,
 			void (*func)(struct rcu_head *head));
-void ipc_rcu_free(struct rcu_head *h);
 
 struct kern_ipc_perm *ipc_lock(struct ipc_ids *, int);
 struct kern_ipc_perm *ipc_obtain_object_idr(struct ipc_ids *ids, int id);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 09/20] ipc/sem: Avoid ipc_rcu_alloc()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (7 preceding siblings ...)
  2017-05-25 18:50   ` [PATCH 08/20] ipc/util: Drop ipc_rcu_free() Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 18:50   ` [PATCH 10/20] ipc/shm: " Manfred Spraul
                     ` (11 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it to perform better sem-specific checks. This
also allows for sem_array structure layout to be randomized in the
future.
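The "sem-specific checks" are chiefly an overflow guard on the size of
the trailing sem array, done before the multiplication can wrap. A
userspace sketch of the pattern (simplified struct layout, malloc()
standing in for kvmalloc()):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-ins for the kernel structs. */
struct sem { int semval; };
struct sem_array {
	int sem_nsems;
	struct sem sems[];	/* flexible trailing array */
};

static struct sem_array *sem_alloc(size_t nsems)
{
	struct sem_array *sma;
	size_t size;

	/* Reject nsems values whose total size would overflow an int;
	 * the division form avoids overflowing during the check itself. */
	if (nsems > (INT_MAX - sizeof(*sma)) / sizeof(sma->sems[0]))
		return NULL;

	size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
	sma = malloc(size);	/* kvmalloc(size, GFP_KERNEL) in the kernel */
	if (!sma)
		return NULL;

	memset(sma, 0, size);
	return sma;
}
```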

Signed-off-by: Kees Cook <keescook@chromium.org>

[manfred@colorfullife.com: Rediff, because the memset was
temporarily inside ipc_rcu_alloc()]

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/sem.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index a04c4d6..445a5b5 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -451,6 +451,25 @@ static inline void sem_rmid(struct ipc_namespace *ns, struct sem_array *s)
 	ipc_rmid(&sem_ids(ns), &s->sem_perm);
 }
 
+static struct sem_array *sem_alloc(size_t nsems)
+{
+	struct sem_array *sma;
+	size_t size;
+
+	if (nsems > (INT_MAX - sizeof(*sma)) / sizeof(sma->sems[0]))
+		return NULL;
+
+	size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
+	sma = kvmalloc(size, GFP_KERNEL);
+	if (unlikely(!sma))
+		return NULL;
+
+	memset(sma, 0, size);
+	atomic_set(&sma->sem_perm.refcount, 1);
+
+	return sma;
+}
+
 /**
  * newary - Create a new semaphore set
  * @ns: namespace
@@ -463,7 +482,6 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	int id;
 	int retval;
 	struct sem_array *sma;
-	int size;
 	key_t key = params->key;
 	int nsems = params->u.nsems;
 	int semflg = params->flg;
@@ -474,10 +492,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	if (ns->used_sems + nsems > ns->sc_semmns)
 		return -ENOSPC;
 
-	BUILD_BUG_ON(offsetof(struct sem_array, sem_perm) != 0);
-
-	size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
-	sma = container_of(ipc_rcu_alloc(size), struct sem_array, sem_perm);
+	sma = sem_alloc(nsems);
 	if (!sma)
 		return -ENOMEM;
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 10/20] ipc/shm: Avoid ipc_rcu_alloc()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (8 preceding siblings ...)
  2017-05-25 18:50   ` [PATCH 09/20] ipc/sem: Avoid ipc_rcu_alloc() Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 18:50   ` [PATCH 11/20] ipc/msg: " Manfred Spraul
                     ` (10 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it. This also allows for shmid_kernel structure
layout to be randomized in the future.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/shm.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/ipc/shm.c b/ipc/shm.c
index 77e1bff..c9f1f30 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -518,6 +518,19 @@ static const struct vm_operations_struct shm_vm_ops = {
 #endif
 };
 
+static struct shmid_kernel *shm_alloc(void)
+{
+	struct shmid_kernel *shp;
+
+	shp = kvmalloc(sizeof(*shp), GFP_KERNEL);
+	if (unlikely(!shp))
+		return NULL;
+
+	atomic_set(&shp->shm_perm.refcount, 1);
+
+	return shp;
+}
+
 /**
  * newseg - Create a new shared memory segment
  * @ns: namespace
@@ -548,10 +561,7 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 			ns->shm_tot + numpages > ns->shm_ctlall)
 		return -ENOSPC;
 
-	BUILD_BUG_ON(offsetof(struct shmid_kernel, shm_perm) != 0);
-
-	shp = container_of(ipc_rcu_alloc(sizeof(*shp)), struct shmid_kernel,
-				shm_perm);
+	shp = shm_alloc();
 	if (!shp)
 		return -ENOMEM;
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 11/20] ipc/msg: Avoid ipc_rcu_alloc()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (9 preceding siblings ...)
  2017-05-25 18:50   ` [PATCH 10/20] ipc/shm: " Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 18:50   ` [PATCH 12/20] ipc/util: Drop ipc_rcu_alloc() Manfred Spraul
                     ` (9 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it. This also allows for msg_queue structure
layout to be randomized in the future.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/msg.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/ipc/msg.c b/ipc/msg.c
index 25d43e2..10094a7 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -109,6 +109,19 @@ static void msg_rcu_free(struct rcu_head *head)
 	__msg_free(msq);
 }
 
+static struct msg_queue *msg_alloc(void)
+{
+	struct msg_queue *msq;
+
+	msq = kvmalloc(sizeof(*msq), GFP_KERNEL);
+	if (unlikely(!msq))
+		return NULL;
+
+	atomic_set(&msq->q_perm.refcount, 1);
+
+	return msq;
+}
+
 /**
  * newque - Create a new msg queue
  * @ns: namespace
@@ -123,10 +136,7 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	key_t key = params->key;
 	int msgflg = params->flg;
 
-	BUILD_BUG_ON(offsetof(struct msg_queue, q_perm) != 0);
-
-	msq = container_of(ipc_rcu_alloc(sizeof(*msq)), struct msg_queue,
-				q_perm);
+	msq = msg_alloc();
 	if (!msq)
 		return -ENOMEM;
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 12/20] ipc/util: Drop ipc_rcu_alloc()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (10 preceding siblings ...)
  2017-05-25 18:50   ` [PATCH 11/20] ipc/msg: " Manfred Spraul
@ 2017-05-25 18:50   ` Manfred Spraul
  2017-05-25 18:51   ` [PATCH 13/20] ipc/sem.c: Avoid ipc_rcu_putref for failed ipc_addid() Manfred Spraul
                     ` (8 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:50 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

No callers remain for ipc_rcu_alloc(). Drop the function.

Signed-off-by: Kees Cook <keescook@chromium.org>

[manfred@colorfullife.com: Rediff because the memset was
temporarily inside ipc_rcu_alloc()]

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/util.c | 21 ---------------------
 ipc/util.h |  3 ---
 2 files changed, 24 deletions(-)

diff --git a/ipc/util.c b/ipc/util.c
index 556884b..2428dd4 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -394,27 +394,6 @@ void ipc_rmid(struct ipc_ids *ids, struct kern_ipc_perm *ipcp)
 	ipcp->deleted = true;
 }
 
-/**
- * ipc_rcu_alloc - allocate ipc space
- * @size: size desired
- *
- * Allocate memory for an ipc object.
- * The first member must be struct kern_ipc_perm.
- */
-struct kern_ipc_perm *ipc_rcu_alloc(int size)
-{
-	/*
-	 * We prepend the allocation with the rcu struct
-	 */
-	struct kern_ipc_perm *out = kvmalloc(size, GFP_KERNEL);
-	if (unlikely(!out))
-		return NULL;
-
-	memset(out, 0, size);
-	atomic_set(&out->refcount, 1);
-	return out;
-}
-
 int ipc_rcu_getref(struct kern_ipc_perm *ptr)
 {
 	return atomic_inc_not_zero(&ptr->refcount);
diff --git a/ipc/util.h b/ipc/util.h
index 44efbc0..77336c2b 100644
--- a/ipc/util.h
+++ b/ipc/util.h
@@ -112,10 +112,7 @@ int ipcperms(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp, short flg);
  * Objects are reference counted, they start with reference count 1.
  * getref increases the refcount, the putref call that reduces the recount
  * to 0 schedules the rcu destruction. Caller must guarantee locking.
- *
- * struct kern_ipc_perm must be the first member in the allocated structure.
  */
-struct kern_ipc_perm *ipc_rcu_alloc(int size);
 int ipc_rcu_getref(struct kern_ipc_perm *ptr);
 void ipc_rcu_putref(struct kern_ipc_perm *ptr,
 			void (*func)(struct rcu_head *head));
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 13/20] ipc/sem.c: Avoid ipc_rcu_putref for failed ipc_addid()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (11 preceding siblings ...)
  2017-05-25 18:50   ` [PATCH 12/20] ipc/util: Drop ipc_rcu_alloc() Manfred Spraul
@ 2017-05-25 18:51   ` Manfred Spraul
  2017-05-25 18:51   ` [PATCH 14/20] ipc/shm.c: " Manfred Spraul
                     ` (7 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:51 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

Loosely based on a patch from Kees Cook <keescook@chromium.org>:
- id and retval can be merged
- if ipc_addid() fails, then use call_rcu() directly.

The difference is that call_rcu is used for failed ipc_addid() calls,
to continue to guarantee an rcu delay for security_sem_free().

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Kees Cook <keescook@chromium.org>
---
 ipc/sem.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 445a5b5..2b2ed56 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -479,7 +479,6 @@ static struct sem_array *sem_alloc(size_t nsems)
  */
 static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 {
-	int id;
 	int retval;
 	struct sem_array *sma;
 	key_t key = params->key;
@@ -520,10 +519,10 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	sma->sem_nsems = nsems;
 	sma->sem_ctime = get_seconds();
 
-	id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
-	if (id < 0) {
-		ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
-		return id;
+	retval = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
+	if (retval < 0) {
+		call_rcu(&sma->sem_perm.rcu, sem_rcu_free);
+		return retval;
 	}
 	ns->used_sems += nsems;
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 14/20] ipc/shm.c: Avoid ipc_rcu_putref for failed ipc_addid()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (12 preceding siblings ...)
  2017-05-25 18:51   ` [PATCH 13/20] ipc/sem.c: Avoid ipc_rcu_putref for failed ipc_addid() Manfred Spraul
@ 2017-05-25 18:51   ` Manfred Spraul
  2017-05-25 18:51   ` [PATCH 15/20] ipc/msg.c: " Manfred Spraul
                     ` (6 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:51 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

Loosely based on a patch from Kees Cook <keescook@chromium.org>:
- id and error can be merged
- if operations before ipc_addid() fail, then use call_rcu() directly.

The difference is that call_rcu is used for failures after
security_shm_alloc(), to continue to guarantee an rcu delay for
security_shm_free().

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Kees Cook <keescook@chromium.org>
---
 ipc/shm.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/ipc/shm.c b/ipc/shm.c
index c9f1f30..cb1d97e 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -548,7 +548,6 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 	size_t numpages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	struct file *file;
 	char name[13];
-	int id;
 	vm_flags_t acctflag = 0;
 
 	if (size < SHMMIN || size > ns->shm_ctlmax)
@@ -617,11 +616,9 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 	shp->shm_file = file;
 	shp->shm_creator = current;
 
-	id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
-	if (id < 0) {
-		error = id;
+	error = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
+	if (error < 0)
 		goto no_id;
-	}
 
 	list_add(&shp->shm_clist, &current->sysvshm.shm_clist);
 
@@ -643,7 +640,7 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 		user_shm_unlock(size, shp->mlock_user);
 	fput(file);
 no_file:
-	ipc_rcu_putref(&shp->shm_perm, shm_rcu_free);
+	call_rcu(&shp->shm_perm.rcu, shm_rcu_free);
 	return error;
 }
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 15/20] ipc/msg.c: Avoid ipc_rcu_putref for failed ipc_addid()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (13 preceding siblings ...)
  2017-05-25 18:51   ` [PATCH 14/20] ipc/shm.c: " Manfred Spraul
@ 2017-05-25 18:51   ` Manfred Spraul
  2017-05-25 18:51   ` [PATCH 16/20] ipc: Move atomic_set() to where it is needed Manfred Spraul
                     ` (5 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:51 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

Loosely based on a patch from Kees Cook <keescook@chromium.org>:
- id and retval can be merged
- if ipc_addid() fails, then use call_rcu() directly.

The difference is that call_rcu is used for failed ipc_addid() calls,
to continue to guarantee an rcu delay for security_msg_queue_free().

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Kees Cook <keescook@chromium.org>
---
 ipc/msg.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/ipc/msg.c b/ipc/msg.c
index 10094a7..cd90bfd 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -132,7 +132,7 @@ static struct msg_queue *msg_alloc(void)
 static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 {
 	struct msg_queue *msq;
-	int id, retval;
+	int retval;
 	key_t key = params->key;
 	int msgflg = params->flg;
 
@@ -160,10 +160,10 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	INIT_LIST_HEAD(&msq->q_senders);
 
 	/* ipc_addid() locks msq upon success. */
-	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
-	if (id < 0) {
-		ipc_rcu_putref(&msq->q_perm, msg_rcu_free);
-		return id;
+	retval = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
+	if (retval < 0) {
+		call_rcu(&msq->q_perm.rcu, msg_rcu_free);
+		return retval;
 	}
 
 	ipc_unlock_object(&msq->q_perm);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 16/20] ipc: Move atomic_set() to where it is needed
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (14 preceding siblings ...)
  2017-05-25 18:51   ` [PATCH 15/20] ipc/msg.c: " Manfred Spraul
@ 2017-05-25 18:51   ` Manfred Spraul
  2017-05-25 18:51   ` [PATCH 17/20] ipc/shm: Remove special shm_alloc/free Manfred Spraul
                     ` (4 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:51 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

Only after ipc_addid() has succeeded will refcounting be used, so
move initialization into ipc_addid() and remove from open-coded
*_alloc() routines.
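The lifecycle this enforces can be sketched in userspace with C11
atomics (hypothetical condensed types; atomic_inc_not_zero() is modeled
with a compare-exchange loop):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Minimal stand-in: the refcount only becomes meaningful once
 * ipc_addid() publishes the object. */
struct kern_ipc_perm {
	atomic_int refcount;
	bool published;
};

static void ipc_addid(struct kern_ipc_perm *p)
{
	atomic_store(&p->refcount, 1);	/* initialization moved here */
	p->published = true;
}

/* Models kernel atomic_inc_not_zero(): only take a reference on an
 * object that is still live (refcount > 0). */
static bool ipc_rcu_getref(struct kern_ipc_perm *p)
{
	int old = atomic_load(&p->refcount);

	while (old != 0) {
		if (atomic_compare_exchange_weak(&p->refcount, &old, old + 1))
			return true;
	}
	return false;
}
```

Before ipc_addid() the object is private to the creating task, so no
reference can be taken and failure paths may free it directly.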

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/msg.c  | 2 --
 ipc/sem.c  | 1 -
 ipc/shm.c  | 2 --
 ipc/util.c | 1 +
 4 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/ipc/msg.c b/ipc/msg.c
index cd90bfd..770342e 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -117,8 +117,6 @@ static struct msg_queue *msg_alloc(void)
 	if (unlikely(!msq))
 		return NULL;
 
-	atomic_set(&msq->q_perm.refcount, 1);
-
 	return msq;
 }
 
diff --git a/ipc/sem.c b/ipc/sem.c
index 2b2ed56..5f13773 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -465,7 +465,6 @@ static struct sem_array *sem_alloc(size_t nsems)
 		return NULL;
 
 	memset(sma, 0, size);
-	atomic_set(&sma->sem_perm.refcount, 1);
 
 	return sma;
 }
diff --git a/ipc/shm.c b/ipc/shm.c
index cb1d97e..b85db5a 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -526,8 +526,6 @@ static struct shmid_kernel *shm_alloc(void)
 	if (unlikely(!shp))
 		return NULL;
 
-	atomic_set(&shp->shm_perm.refcount, 1);
-
 	return shp;
 }
 
diff --git a/ipc/util.c b/ipc/util.c
index 2428dd4..1a2cb02 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -232,6 +232,7 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size)
 
 	idr_preload(GFP_KERNEL);
 
+	atomic_set(&new->refcount, 1);
 	spin_lock_init(&new->lock);
 	new->deleted = false;
 	rcu_read_lock();
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 17/20] ipc/shm: Remove special shm_alloc/free
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (15 preceding siblings ...)
  2017-05-25 18:51   ` [PATCH 16/20] ipc: Move atomic_set() to where it is needed Manfred Spraul
@ 2017-05-25 18:51   ` Manfred Spraul
  2017-05-25 18:51   ` [PATCH 18/20] ipc/msg: Remove special msg_alloc/free Manfred Spraul
                     ` (3 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:51 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

There is nothing special about the shm_alloc/free routines any more,
so remove them to make code more readable.

Signed-off-by: Kees Cook <keescook@chromium.org>

[manfred@colorfullife.com: Rediff, to continue to keep rcu for
free calls after a successful security_shm_alloc()]
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/shm.c | 24 ++++--------------------
 1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/ipc/shm.c b/ipc/shm.c
index b85db5a..ec5688e 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -172,11 +172,6 @@ static inline void shm_lock_by_ptr(struct shmid_kernel *ipcp)
 	ipc_lock_object(&ipcp->shm_perm);
 }
 
-static void __shm_free(struct shmid_kernel *shp)
-{
-	kvfree(shp);
-}
-
 static void shm_rcu_free(struct rcu_head *head)
 {
 	struct kern_ipc_perm *ptr = container_of(head, struct kern_ipc_perm,
@@ -184,7 +179,7 @@ static void shm_rcu_free(struct rcu_head *head)
 	struct shmid_kernel *shp = container_of(ptr, struct shmid_kernel,
 							shm_perm);
 	security_shm_free(shp);
-	__shm_free(shp);
+	kvfree(shp);
 }
 
 static inline void shm_rmid(struct ipc_namespace *ns, struct shmid_kernel *s)
@@ -518,17 +513,6 @@ static const struct vm_operations_struct shm_vm_ops = {
 #endif
 };
 
-static struct shmid_kernel *shm_alloc(void)
-{
-	struct shmid_kernel *shp;
-
-	shp = kvmalloc(sizeof(*shp), GFP_KERNEL);
-	if (unlikely(!shp))
-		return NULL;
-
-	return shp;
-}
-
 /**
  * newseg - Create a new shared memory segment
  * @ns: namespace
@@ -558,8 +542,8 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 			ns->shm_tot + numpages > ns->shm_ctlall)
 		return -ENOSPC;
 
-	shp = shm_alloc();
-	if (!shp)
+	shp = kvmalloc(sizeof(*shp), GFP_KERNEL);
+	if (unlikely(!shp))
 		return -ENOMEM;
 
 	shp->shm_perm.key = key;
@@ -569,7 +553,7 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 	shp->shm_perm.security = NULL;
 	error = security_shm_alloc(shp);
 	if (error) {
-		__shm_free(shp);
+		kvfree(shp);
 		return error;
 	}
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 18/20] ipc/msg: Remove special msg_alloc/free
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (16 preceding siblings ...)
  2017-05-25 18:51   ` [PATCH 17/20] ipc/shm: Remove special shm_alloc/free Manfred Spraul
@ 2017-05-25 18:51   ` Manfred Spraul
  2017-05-25 18:51   ` [PATCH 19/20] ipc/sem: Drop __sem_free() Manfred Spraul
                     ` (2 subsequent siblings)
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:51 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

There is nothing special about the msg_alloc/free routines any more,
so remove them to make code more readable.

Signed-off-by: Kees Cook <keescook@chromium.org>

[manfred@colorfullife.com: Rediff to keep rcu protection for
security_msg_queue_alloc()]
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/msg.c | 24 ++++--------------------
 1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/ipc/msg.c b/ipc/msg.c
index 770342e..5b25e07 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -95,29 +95,13 @@ static inline void msg_rmid(struct ipc_namespace *ns, struct msg_queue *s)
 	ipc_rmid(&msg_ids(ns), &s->q_perm);
 }
 
-static void __msg_free(struct msg_queue *msq)
-{
-	kvfree(msq);
-}
-
 static void msg_rcu_free(struct rcu_head *head)
 {
 	struct kern_ipc_perm *p = container_of(head, struct kern_ipc_perm, rcu);
 	struct msg_queue *msq = container_of(p, struct msg_queue, q_perm);
 
 	security_msg_queue_free(msq);
-	__msg_free(msq);
-}
-
-static struct msg_queue *msg_alloc(void)
-{
-	struct msg_queue *msq;
-
-	msq = kvmalloc(sizeof(*msq), GFP_KERNEL);
-	if (unlikely(!msq))
-		return NULL;
-
-	return msq;
+	kvfree(msq);
 }
 
 /**
@@ -134,8 +118,8 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	key_t key = params->key;
 	int msgflg = params->flg;
 
-	msq = msg_alloc();
-	if (!msq)
+	msq = kvmalloc(sizeof(*msq), GFP_KERNEL);
+	if (unlikely(!msq))
 		return -ENOMEM;
 
 	msq->q_perm.mode = msgflg & S_IRWXUGO;
@@ -144,7 +128,7 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
 	msq->q_perm.security = NULL;
 	retval = security_msg_queue_alloc(msq);
 	if (retval) {
-		__msg_free(msq);
+		kvfree(msq);
 		return retval;
 	}
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 19/20] ipc/sem: Drop __sem_free()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (17 preceding siblings ...)
  2017-05-25 18:51   ` [PATCH 18/20] ipc/msg: Remove special msg_alloc/free Manfred Spraul
@ 2017-05-25 18:51   ` Manfred Spraul
  2017-05-25 18:51   ` [PATCH 20/20] ipc/util.h: Update documentation for ipc_getref() and ipc_putref() Manfred Spraul
  2017-05-25 19:45   ` [PATCH 0/20 V3] Misc cleanups for ipc Kees Cook
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:51 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

From: Kees Cook <keescook@chromium.org>

The remaining users of __sem_free() can simply call kvfree() instead for
better readability.

Signed-off-by: Kees Cook <keescook@chromium.org>

[manfred@colorfullife.com: Rediff to keep rcu protection for
security_sem_alloc()]
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/sem.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 5f13773..9e70cd7 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -258,18 +258,13 @@ static void merge_queues(struct sem_array *sma)
 	}
 }
 
-static void __sem_free(struct sem_array *sma)
-{
-	kvfree(sma);
-}
-
 static void sem_rcu_free(struct rcu_head *head)
 {
 	struct kern_ipc_perm *p = container_of(head, struct kern_ipc_perm, rcu);
 	struct sem_array *sma = container_of(p, struct sem_array, sem_perm);
 
 	security_sem_free(sma);
-	__sem_free(sma);
+	kvfree(sma);
 }
 
 /*
@@ -500,7 +495,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
 	sma->sem_perm.security = NULL;
 	retval = security_sem_alloc(sma);
 	if (retval) {
-		__sem_free(sma);
+		kvfree(sma);
 		return retval;
 	}
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 20/20] ipc/util.h: Update documentation for ipc_getref() and ipc_putref()
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (18 preceding siblings ...)
  2017-05-25 18:51   ` [PATCH 19/20] ipc/sem: Drop __sem_free() Manfred Spraul
@ 2017-05-25 18:51   ` Manfred Spraul
  2017-05-25 19:45   ` [PATCH 0/20 V3] Misc cleanups for ipc Kees Cook
  20 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-25 18:51 UTC (permalink / raw)
  To: mtk.manpages, Andrew Morton, Kees Cook
  Cc: LKML, 1vier1, Davidlohr Bueso, mingo, peterz, fabf, Manfred Spraul

Now that ipc_rcu_alloc() and ipc_rcu_free() are removed,
document when it is valid to use ipc_getref() and ipc_putref().

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/util.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/ipc/util.h b/ipc/util.h
index 77336c2b..c692010 100644
--- a/ipc/util.h
+++ b/ipc/util.h
@@ -112,6 +112,9 @@ int ipcperms(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp, short flg);
  * Objects are reference counted, they start with reference count 1.
  * getref increases the refcount, the putref call that reduces the recount
  * to 0 schedules the rcu destruction. Caller must guarantee locking.
+ *
+ * refcount is initialized by ipc_addid(), before that point call_rcu()
+ * must be used.
  */
 int ipc_rcu_getref(struct kern_ipc_perm *ptr);
 void ipc_rcu_putref(struct kern_ipc_perm *ptr,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH 02/20] ipc: merge ipc_rcu and kern_ipc_perm
  2017-05-25 18:50   ` [PATCH 02/20] ipc: merge ipc_rcu and kern_ipc_perm Manfred Spraul
@ 2017-05-25 19:34     ` Kees Cook
  2017-05-26  3:37       ` Kees Cook
  0 siblings, 1 reply; 38+ messages in thread
From: Kees Cook @ 2017-05-25 19:34 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: Michael Kerrisk, Andrew Morton, LKML, 1vier1, Davidlohr Bueso,
	Ingo Molnar, Peter Zijlstra, Fabian Frederick

On Thu, May 25, 2017 at 11:50 AM, Manfred Spraul
<manfred@colorfullife.com> wrote:
> ipc has two management structures that exist for every id:
> - struct kern_ipc_perm, it contains e.g. the permissions.
> - struct ipc_rcu, it contains the rcu head for rcu handling and
>   the refcount.
>
> The patch merges both structures.
> As a bonus, we may save one cacheline, because both structures are
> cacheline aligned.
> In addition, it reduces the number of casts, instead most codepaths can
> use container_of.
>
> To simplify code, the ipc_rcu_alloc initializes the allocation to 0.

I don't see this change in the code, only the removal of sem's
memset(..., 0, ...)?

> diff --git a/ipc/sem.c b/ipc/sem.c
> index fff8337..bdff6d9 100644
> --- a/ipc/sem.c
> +++ b/ipc/sem.c
> @@ -469,20 +469,20 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
>         if (ns->used_sems + nsems > ns->sc_semmns)
>                 return -ENOSPC;
>
> +       BUILD_BUG_ON(offsetof(struct sem_array, sem_perm) != 0);
> +
>         size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
> -       sma = ipc_rcu_alloc(size);
> +       sma = container_of(ipc_rcu_alloc(size), struct sem_array, sem_perm);
>         if (!sma)
>                 return -ENOMEM;
>
> -       memset(sma, 0, size);
> -
>         sma->sem_perm.mode = (semflg & S_IRWXUGO);
>         sma->sem_perm.key = key;
>
>         sma->sem_perm.security = NULL;
>         retval = security_sem_alloc(sma);
>         if (retval) {
> -               ipc_rcu_putref(sma, ipc_rcu_free);
> +               ipc_rcu_putref(&sma->sem_perm, ipc_rcu_free);
>                 return retval;
>         }
>
> [...]
> diff --git a/ipc/util.c b/ipc/util.c
> index caec7b1..9dcc08b 100644
> --- a/ipc/util.c
> +++ b/ipc/util.c
> @@ -418,46 +418,43 @@ void ipc_free(void *ptr)
>  }
>
>  /**
> - * ipc_rcu_alloc - allocate ipc and rcu space
> + * ipc_rcu_alloc - allocate ipc space
>   * @size: size desired
>   *
> - * Allocate memory for the rcu header structure +  the object.
> - * Returns the pointer to the object or NULL upon failure.
> + * Allocate memory for an ipc object.
> + * The first member must be struct kern_ipc_perm.
>   */
> -void *ipc_rcu_alloc(int size)
> +struct kern_ipc_perm *ipc_rcu_alloc(int size)
>  {
>         /*
>          * We prepend the allocation with the rcu struct
>          */
> -       struct ipc_rcu *out = ipc_alloc(sizeof(struct ipc_rcu) + size);
> +       struct kern_ipc_perm *out = ipc_alloc(size);
>         if (unlikely(!out))
>                 return NULL;
>         atomic_set(&out->refcount, 1);
> -       return out + 1;
> +       return out;
>  }

-Kees

-- 
Kees Cook
Pixel Security

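The container_of pattern under discussion can be sketched in standalone C. The struct contents below are simplified stand-ins, not the real kernel definitions: because kern_ipc_perm is embedded as the first member of sem_array, a pointer to the member maps back to the containing object without a cast between unrelated non-void types, which is what the BUILD_BUG_ON in the quoted hunk enforces.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Userspace equivalent of the kernel's container_of(): recover the
 * containing struct from a pointer to one of its members. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Simplified stand-ins for the kernel structures. */
struct kern_ipc_perm {
	int key;
	int mode;
};

struct sem_array {
	struct kern_ipc_perm sem_perm;	/* must stay at offset 0 */
	int sem_nsems;
};

/* Stands in for ipc_rcu_alloc(): allocates the full object but
 * returns a pointer to the embedded kern_ipc_perm member. */
struct kern_ipc_perm *alloc_perm(size_t size)
{
	struct sem_array *sma = calloc(1, size);

	return sma ? &sma->sem_perm : NULL;
}
```

With sem_perm at offset 0, container_of() is a no-op address-wise, so even the NULL return from a failed allocation survives the conversion, which is why the `if (!sma)` check after `container_of(ipc_rcu_alloc(size), ...)` still works.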

* Re: [PATCH 04/20] ipc: Drop non-RCU allocation
  2017-05-25 18:50   ` [PATCH 04/20] ipc: Drop non-RCU allocation Manfred Spraul
@ 2017-05-25 19:35     ` Kees Cook
  0 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2017-05-25 19:35 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: Michael Kerrisk, Andrew Morton, LKML, 1vier1, Davidlohr Bueso,
	Ingo Molnar, Peter Zijlstra, Fabian Frederick

On Thu, May 25, 2017 at 11:50 AM, Manfred Spraul
<manfred@colorfullife.com> wrote:
> The only users of ipc_alloc() were ipc_rcu_alloc() and the on-heap
> sem_io fall-back memory. Better to just open-code these to make things
> easier to read.
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
>
> [manfred@colorfullife.com: Rediff due to inclusion of memset() into
> ipc_rcu_alloc().]

Oh! I see, the hunk snuck into this patch rather than patch 3.

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH 01/20] ipc/sem.c: remove sem_base, embed struct sem
  2017-05-25 18:50   ` [PATCH 01/20] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
@ 2017-05-25 19:43     ` Kees Cook
  0 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2017-05-25 19:43 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: Michael Kerrisk, Andrew Morton, LKML, 1vier1, Davidlohr Bueso,
	Ingo Molnar, Peter Zijlstra, Fabian Frederick

On Thu, May 25, 2017 at 11:50 AM, Manfred Spraul
<manfred@colorfullife.com> wrote:
> sma->sem_base is initialized with
>         sma->sem_base = (struct sem *) &sma[1];
>
> The current code has four problems:
> - There is an unnecessary pointer dereference - sem_base is not needed.
> - Alignment for struct sem only works by chance.
> - The current code causes false positive for static code analysis.
> - This is a cast between different non-void types, which the future
>   randstruct GCC plugin warns on.
>
> And, as bonus, the code size gets smaller:
>
> Before:
>   0 .text         00003770
> After:
>   0 .text         0000374e
>
> Signed-off-by: Manfred Spraul <manfred@colorfullife.com>

Acked-by: Kees Cook <keescook@chromium.org>

-Kees

> ---
>  include/linux/sem.h | 22 +++++++++++++-
>  ipc/sem.c           | 88 +++++++++++++++++++++--------------------------------
>  2 files changed, 55 insertions(+), 55 deletions(-)
>
> diff --git a/include/linux/sem.h b/include/linux/sem.h
> index 9edec92..9db1409 100644
> --- a/include/linux/sem.h
> +++ b/include/linux/sem.h
> @@ -8,11 +8,29 @@
>
>  struct task_struct;
>
> +/* One semaphore structure for each semaphore in the system. */
> +struct sem {
> +       int     semval;         /* current value */
> +       /*
> +        * PID of the process that last modified the semaphore. For
> +        * Linux, specifically these are:
> +        *  - semop
> +        *  - semctl, via SETVAL and SETALL.
> +        *  - at task exit when performing undo adjustments (see exit_sem).
> +        */
> +       int     sempid;
> +       spinlock_t      lock;   /* spinlock for fine-grained semtimedop */
> +       struct list_head pending_alter; /* pending single-sop operations */
> +                                       /* that alter the semaphore */
> +       struct list_head pending_const; /* pending single-sop operations */
> +                                       /* that do not alter the semaphore*/
> +       time_t  sem_otime;      /* candidate for sem_otime */
> +} ____cacheline_aligned_in_smp;
> +
>  /* One sem_array data structure for each set of semaphores in the system. */
>  struct sem_array {
>         struct kern_ipc_perm    sem_perm;       /* permissions .. see ipc.h */
>         time_t                  sem_ctime;      /* last change time */
> -       struct sem              *sem_base;      /* ptr to first semaphore in array */
>         struct list_head        pending_alter;  /* pending operations */
>                                                 /* that alter the array */
>         struct list_head        pending_const;  /* pending complex operations */
> @@ -21,6 +39,8 @@ struct sem_array {
>         int                     sem_nsems;      /* no. of semaphores in array */
>         int                     complex_count;  /* pending complex operations */
>         unsigned int            use_global_lock;/* >0: global lock required */
> +
> +       struct sem              sems[];
>  };
>
>  #ifdef CONFIG_SYSVIPC
> diff --git a/ipc/sem.c b/ipc/sem.c
> index 947dc23..fff8337 100644
> --- a/ipc/sem.c
> +++ b/ipc/sem.c
> @@ -87,24 +87,6 @@
>  #include <linux/uaccess.h>
>  #include "util.h"
>
> -/* One semaphore structure for each semaphore in the system. */
> -struct sem {
> -       int     semval;         /* current value */
> -       /*
> -        * PID of the process that last modified the semaphore. For
> -        * Linux, specifically these are:
> -        *  - semop
> -        *  - semctl, via SETVAL and SETALL.
> -        *  - at task exit when performing undo adjustments (see exit_sem).
> -        */
> -       int     sempid;
> -       spinlock_t      lock;   /* spinlock for fine-grained semtimedop */
> -       struct list_head pending_alter; /* pending single-sop operations */
> -                                       /* that alter the semaphore */
> -       struct list_head pending_const; /* pending single-sop operations */
> -                                       /* that do not alter the semaphore*/
> -       time_t  sem_otime;      /* candidate for sem_otime */
> -} ____cacheline_aligned_in_smp;
>
>  /* One queue for each sleeping process in the system. */
>  struct sem_queue {
> @@ -175,7 +157,7 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it);
>   *     sem_array.sem_undo
>   *
>   * b) global or semaphore sem_lock() for read/write:
> - *     sem_array.sem_base[i].pending_{const,alter}:
> + *     sem_array.sems[i].pending_{const,alter}:
>   *
>   * c) special:
>   *     sem_undo_list.list_proc:
> @@ -250,7 +232,7 @@ static void unmerge_queues(struct sem_array *sma)
>          */
>         list_for_each_entry_safe(q, tq, &sma->pending_alter, list) {
>                 struct sem *curr;
> -               curr = &sma->sem_base[q->sops[0].sem_num];
> +               curr = &sma->sems[q->sops[0].sem_num];
>
>                 list_add_tail(&q->list, &curr->pending_alter);
>         }
> @@ -270,7 +252,7 @@ static void merge_queues(struct sem_array *sma)
>  {
>         int i;
>         for (i = 0; i < sma->sem_nsems; i++) {
> -               struct sem *sem = sma->sem_base + i;
> +               struct sem *sem = &sma->sems[i];
>
>                 list_splice_init(&sem->pending_alter, &sma->pending_alter);
>         }
> @@ -306,7 +288,7 @@ static void complexmode_enter(struct sem_array *sma)
>         sma->use_global_lock = USE_GLOBAL_LOCK_HYSTERESIS;
>
>         for (i = 0; i < sma->sem_nsems; i++) {
> -               sem = sma->sem_base + i;
> +               sem = &sma->sems[i];
>                 spin_lock(&sem->lock);
>                 spin_unlock(&sem->lock);
>         }
> @@ -366,7 +348,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
>          *
>          * Both facts are tracked by use_global_mode.
>          */
> -       sem = sma->sem_base + sops->sem_num;
> +       sem = &sma->sems[sops->sem_num];
>
>         /*
>          * Initial check for use_global_lock. Just an optimization,
> @@ -421,7 +403,7 @@ static inline void sem_unlock(struct sem_array *sma, int locknum)
>                 complexmode_tryleave(sma);
>                 ipc_unlock_object(&sma->sem_perm);
>         } else {
> -               struct sem *sem = sma->sem_base + locknum;
> +               struct sem *sem = &sma->sems[locknum];
>                 spin_unlock(&sem->lock);
>         }
>  }
> @@ -487,7 +469,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
>         if (ns->used_sems + nsems > ns->sc_semmns)
>                 return -ENOSPC;
>
> -       size = sizeof(*sma) + nsems * sizeof(struct sem);
> +       size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
>         sma = ipc_rcu_alloc(size);
>         if (!sma)
>                 return -ENOMEM;
> @@ -504,12 +486,10 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
>                 return retval;
>         }
>
> -       sma->sem_base = (struct sem *) &sma[1];
> -
>         for (i = 0; i < nsems; i++) {
> -               INIT_LIST_HEAD(&sma->sem_base[i].pending_alter);
> -               INIT_LIST_HEAD(&sma->sem_base[i].pending_const);
> -               spin_lock_init(&sma->sem_base[i].lock);
> +               INIT_LIST_HEAD(&sma->sems[i].pending_alter);
> +               INIT_LIST_HEAD(&sma->sems[i].pending_const);
> +               spin_lock_init(&sma->sems[i].lock);
>         }
>
>         sma->complex_count = 0;
> @@ -612,7 +592,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
>         un = q->undo;
>
>         for (sop = sops; sop < sops + nsops; sop++) {
> -               curr = sma->sem_base + sop->sem_num;
> +               curr = &sma->sems[sop->sem_num];
>                 sem_op = sop->sem_op;
>                 result = curr->semval;
>
> @@ -639,7 +619,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
>         sop--;
>         pid = q->pid;
>         while (sop >= sops) {
> -               sma->sem_base[sop->sem_num].sempid = pid;
> +               sma->sems[sop->sem_num].sempid = pid;
>                 sop--;
>         }
>
> @@ -661,7 +641,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
>         sop--;
>         while (sop >= sops) {
>                 sem_op = sop->sem_op;
> -               sma->sem_base[sop->sem_num].semval -= sem_op;
> +               sma->sems[sop->sem_num].semval -= sem_op;
>                 if (sop->sem_flg & SEM_UNDO)
>                         un->semadj[sop->sem_num] += sem_op;
>                 sop--;
> @@ -692,7 +672,7 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
>          * until the operations can go through.
>          */
>         for (sop = sops; sop < sops + nsops; sop++) {
> -               curr = sma->sem_base + sop->sem_num;
> +               curr = &sma->sems[sop->sem_num];
>                 sem_op = sop->sem_op;
>                 result = curr->semval;
>
> @@ -716,7 +696,7 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
>         }
>
>         for (sop = sops; sop < sops + nsops; sop++) {
> -               curr = sma->sem_base + sop->sem_num;
> +               curr = &sma->sems[sop->sem_num];
>                 sem_op = sop->sem_op;
>                 result = curr->semval;
>
> @@ -815,7 +795,7 @@ static int wake_const_ops(struct sem_array *sma, int semnum,
>         if (semnum == -1)
>                 pending_list = &sma->pending_const;
>         else
> -               pending_list = &sma->sem_base[semnum].pending_const;
> +               pending_list = &sma->sems[semnum].pending_const;
>
>         list_for_each_entry_safe(q, tmp, pending_list, list) {
>                 int error = perform_atomic_semop(sma, q);
> @@ -856,7 +836,7 @@ static int do_smart_wakeup_zero(struct sem_array *sma, struct sembuf *sops,
>                 for (i = 0; i < nsops; i++) {
>                         int num = sops[i].sem_num;
>
> -                       if (sma->sem_base[num].semval == 0) {
> +                       if (sma->sems[num].semval == 0) {
>                                 got_zero = 1;
>                                 semop_completed |= wake_const_ops(sma, num, wake_q);
>                         }
> @@ -867,7 +847,7 @@ static int do_smart_wakeup_zero(struct sem_array *sma, struct sembuf *sops,
>                  * Assume all were changed.
>                  */
>                 for (i = 0; i < sma->sem_nsems; i++) {
> -                       if (sma->sem_base[i].semval == 0) {
> +                       if (sma->sems[i].semval == 0) {
>                                 got_zero = 1;
>                                 semop_completed |= wake_const_ops(sma, i, wake_q);
>                         }
> @@ -909,7 +889,7 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
>         if (semnum == -1)
>                 pending_list = &sma->pending_alter;
>         else
> -               pending_list = &sma->sem_base[semnum].pending_alter;
> +               pending_list = &sma->sems[semnum].pending_alter;
>
>  again:
>         list_for_each_entry_safe(q, tmp, pending_list, list) {
> @@ -922,7 +902,7 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
>                  * be in the  per semaphore pending queue, and decrements
>                  * cannot be successful if the value is already 0.
>                  */
> -               if (semnum != -1 && sma->sem_base[semnum].semval == 0)
> +               if (semnum != -1 && sma->sems[semnum].semval == 0)
>                         break;
>
>                 error = perform_atomic_semop(sma, q);
> @@ -959,9 +939,9 @@ static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *w
>  static void set_semotime(struct sem_array *sma, struct sembuf *sops)
>  {
>         if (sops == NULL) {
> -               sma->sem_base[0].sem_otime = get_seconds();
> +               sma->sems[0].sem_otime = get_seconds();
>         } else {
> -               sma->sem_base[sops[0].sem_num].sem_otime =
> +               sma->sems[sops[0].sem_num].sem_otime =
>                                                         get_seconds();
>         }
>  }
> @@ -1067,9 +1047,9 @@ static int count_semcnt(struct sem_array *sma, ushort semnum,
>         semcnt = 0;
>         /* First: check the simple operations. They are easy to evaluate */
>         if (count_zero)
> -               l = &sma->sem_base[semnum].pending_const;
> +               l = &sma->sems[semnum].pending_const;
>         else
> -               l = &sma->sem_base[semnum].pending_alter;
> +               l = &sma->sems[semnum].pending_alter;
>
>         list_for_each_entry(q, l, list) {
>                 /* all task on a per-semaphore list sleep on exactly
> @@ -1124,7 +1104,7 @@ static void freeary(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
>                 wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
>         }
>         for (i = 0; i < sma->sem_nsems; i++) {
> -               struct sem *sem = sma->sem_base + i;
> +               struct sem *sem = &sma->sems[i];
>                 list_for_each_entry_safe(q, tq, &sem->pending_const, list) {
>                         unlink_queue(sma, q);
>                         wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
> @@ -1174,9 +1154,9 @@ static time_t get_semotime(struct sem_array *sma)
>         int i;
>         time_t res;
>
> -       res = sma->sem_base[0].sem_otime;
> +       res = sma->sems[0].sem_otime;
>         for (i = 1; i < sma->sem_nsems; i++) {
> -               time_t to = sma->sem_base[i].sem_otime;
> +               time_t to = sma->sems[i].sem_otime;
>
>                 if (to > res)
>                         res = to;
> @@ -1325,7 +1305,7 @@ static int semctl_setval(struct ipc_namespace *ns, int semid, int semnum,
>                 return -EIDRM;
>         }
>
> -       curr = &sma->sem_base[semnum];
> +       curr = &sma->sems[semnum];
>
>         ipc_assert_locked_object(&sma->sem_perm);
>         list_for_each_entry(un, &sma->list_id, list_id)
> @@ -1402,7 +1382,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
>                         }
>                 }
>                 for (i = 0; i < sma->sem_nsems; i++)
> -                       sem_io[i] = sma->sem_base[i].semval;
> +                       sem_io[i] = sma->sems[i].semval;
>                 sem_unlock(sma, -1);
>                 rcu_read_unlock();
>                 err = 0;
> @@ -1450,8 +1430,8 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
>                 }
>
>                 for (i = 0; i < nsems; i++) {
> -                       sma->sem_base[i].semval = sem_io[i];
> -                       sma->sem_base[i].sempid = task_tgid_vnr(current);
> +                       sma->sems[i].semval = sem_io[i];
> +                       sma->sems[i].sempid = task_tgid_vnr(current);
>                 }
>
>                 ipc_assert_locked_object(&sma->sem_perm);
> @@ -1476,7 +1456,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
>                 err = -EIDRM;
>                 goto out_unlock;
>         }
> -       curr = &sma->sem_base[semnum];
> +       curr = &sma->sems[semnum];
>
>         switch (cmd) {
>         case GETVAL:
> @@ -1932,7 +1912,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
>          */
>         if (nsops == 1) {
>                 struct sem *curr;
> -               curr = &sma->sem_base[sops->sem_num];
> +               curr = &sma->sems[sops->sem_num];
>
>                 if (alter) {
>                         if (sma->complex_count) {
> @@ -2146,7 +2126,7 @@ void exit_sem(struct task_struct *tsk)
>
>                 /* perform adjustments registered in un */
>                 for (i = 0; i < sma->sem_nsems; i++) {
> -                       struct sem *semaphore = &sma->sem_base[i];
> +                       struct sem *semaphore = &sma->sems[i];
>                         if (un->semadj[i]) {
>                                 semaphore->semval += un->semadj[i];
>                                 /*
> --
> 2.9.3
>



-- 
Kees Cook
Pixel Security

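The flexible-array-member change in the patch above can be sketched in standalone C. The fields are simplified stand-ins for the kernel structs: `struct sem sems[]` replaces the `(struct sem *)&sma[1]` cast, so the compiler computes the layout and alignment of the trailing array itself instead of it working "by chance".

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel structures. */
struct sem {
	int semval;
	int sempid;
};

struct sem_array {
	int sem_nsems;
	struct sem sems[];	/* trailing array, sized at allocation */
};

/* Allocate the header plus nsems trailing elements in one block,
 * mirroring: size = sizeof(*sma) + nsems * sizeof(sma->sems[0]); */
struct sem_array *sem_array_alloc(int nsems)
{
	struct sem_array *sma =
		calloc(1, sizeof(*sma) + nsems * sizeof(sma->sems[0]));

	if (sma)
		sma->sem_nsems = nsems;
	return sma;
}
```

The elements are then reached as `sma->sems[i]` with no extra pointer member to dereference and no cast, which is exactly the `sem_base` removal done throughout the quoted diff.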

* Re: [PATCH 0/20 V3] Misc cleanups for ipc
  2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
                     ` (19 preceding siblings ...)
  2017-05-25 18:51   ` [PATCH 20/20] ipc/util.h: Update documentation for ipc_getref() and ipc_putref() Manfred Spraul
@ 2017-05-25 19:45   ` Kees Cook
  2017-05-26  1:56     ` Manfred Spraul
  20 siblings, 1 reply; 38+ messages in thread
From: Kees Cook @ 2017-05-25 19:45 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: Michael Kerrisk, Andrew Morton, LKML, 1vier1, Davidlohr Bueso,
	Ingo Molnar, Peter Zijlstra, Fabian Frederick

On Thu, May 25, 2017 at 11:50 AM, Manfred Spraul
<manfred@colorfullife.com> wrote:
> Hi all,
>
> Updated series. The series got longer, because I merged all patches
> from Kees.
>
> Main changes are:
> - sems[] instead of sem[0].
> - Immediately use BUILD_BUG_ON()
> - Immediately move the memset() to avoid crashing with SEM_UNDO.
> - Use rcu for every security_xx_free(), even if ipc_addid() was not
>   successful

This all looks great (excepting the accidental hunk migration). I like
the change to use call_rcu(), though I still think it's overkill. ;)

Thanks for integrating my changes!

-Kees

> @Andrew: Could you add them again to your tree?
>
> @Michael:
> Should we update man semctl(2)?
Several years ago, I did a review and found that sem_ctime is the time
of the last change only on Coherent...
>
> http://calculix-rpm.sourceforge.net/sysvsem.html
>
> --
>         Manfred



-- 
Kees Cook
Pixel Security


* Re: [PATCH 0/20 V3] Misc cleanups for ipc
  2017-05-25 19:45   ` [PATCH 0/20 V3] Misc cleanups for ipc Kees Cook
@ 2017-05-26  1:56     ` Manfred Spraul
  0 siblings, 0 replies; 38+ messages in thread
From: Manfred Spraul @ 2017-05-26  1:56 UTC (permalink / raw)
  To: Kees Cook
  Cc: Michael Kerrisk, Andrew Morton, LKML, 1vier1, Davidlohr Bueso,
	Ingo Molnar, Peter Zijlstra, Fabian Frederick

Hi Kees,

On 05/25/2017 09:45 PM, Kees Cook wrote:
> On Thu, May 25, 2017 at 11:50 AM, Manfred Spraul
> <manfred@colorfullife.com> wrote:
>> Hi all,
>>
>> Updated series. The series got longer, because I merged all patches
>> from Kees.
>>
>> Main changes are:
>> - sems[] instead of sem[0].
>> - Immediately use BUILD_BUG_ON()
>> - Immediately move the memset() to avoid crashing with SEM_UNDO.
>> - Use rcu for every security_xx_free(), even if ipc_addid() was not
>>    successful
> This all looks great (excepting the accidental hunk migration). I like
> the change to use call_rcu(), though I still think it's overkill. ;)
I still don't see the hunk migration.

What I wanted to do was to merge a fix for the wrong memset into
"[PATCH 02/20] ipc: merge ipc_rcu and kern_ipc_perm"

--
     Manfred


* Re: [PATCH 02/20] ipc: merge ipc_rcu and kern_ipc_perm
  2017-05-25 19:34     ` Kees Cook
@ 2017-05-26  3:37       ` Kees Cook
  0 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2017-05-26  3:37 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: Michael Kerrisk, Andrew Morton, LKML, 1vier1, Davidlohr Bueso,
	Ingo Molnar, Peter Zijlstra, Fabian Frederick

On Thu, May 25, 2017 at 12:34 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, May 25, 2017 at 11:50 AM, Manfred Spraul
> <manfred@colorfullife.com> wrote:
>> ipc has two management structures that exist for every id:
>> - struct kern_ipc_perm, it contains e.g. the permissions.
>> - struct ipc_rcu, it contains the rcu head for rcu handling and
>>   the refcount.
>>
>> The patch merges both structures.
>> As a bonus, we may save one cacheline, because both structures are
>> cacheline aligned.
>> In addition, it reduces the number of casts, instead most codepaths can
>> use container_of.
>>
>> To simplify code, the ipc_rcu_alloc initializes the allocation to 0.
>
> I don't see this change in the code, only the removal of sem's
> memset(..., 0, ...)?
>
>> diff --git a/ipc/sem.c b/ipc/sem.c
>> index fff8337..bdff6d9 100644
>> --- a/ipc/sem.c
>> +++ b/ipc/sem.c
>> @@ -469,20 +469,20 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
>>         if (ns->used_sems + nsems > ns->sc_semmns)
>>                 return -ENOSPC;
>>
>> +       BUILD_BUG_ON(offsetof(struct sem_array, sem_perm) != 0);
>> +
>>         size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
>> -       sma = ipc_rcu_alloc(size);
>> +       sma = container_of(ipc_rcu_alloc(size), struct sem_array, sem_perm);
>>         if (!sma)
>>                 return -ENOMEM;
>>
>> -       memset(sma, 0, size);
>> -
>>         sma->sem_perm.mode = (semflg & S_IRWXUGO);
>>         sma->sem_perm.key = key;
>>
>>         sma->sem_perm.security = NULL;
>>         retval = security_sem_alloc(sma);
>>         if (retval) {
>> -               ipc_rcu_putref(sma, ipc_rcu_free);
>> +               ipc_rcu_putref(&sma->sem_perm, ipc_rcu_free);
>>                 return retval;
>>         }
>>
>> [...]
>> diff --git a/ipc/util.c b/ipc/util.c
>> index caec7b1..9dcc08b 100644
>> --- a/ipc/util.c
>> +++ b/ipc/util.c
>> @@ -418,46 +418,43 @@ void ipc_free(void *ptr)
>>  }
>>
>>  /**
>> - * ipc_rcu_alloc - allocate ipc and rcu space
>> + * ipc_rcu_alloc - allocate ipc space
>>   * @size: size desired
>>   *
>> - * Allocate memory for the rcu header structure +  the object.
>> - * Returns the pointer to the object or NULL upon failure.
>> + * Allocate memory for an ipc object.
>> + * The first member must be struct kern_ipc_perm.
>>   */
>> -void *ipc_rcu_alloc(int size)
>> +struct kern_ipc_perm *ipc_rcu_alloc(int size)
>>  {
>>         /*
>>          * We prepend the allocation with the rcu struct
>>          */
>> -       struct ipc_rcu *out = ipc_alloc(sizeof(struct ipc_rcu) + size);
>> +       struct kern_ipc_perm *out = ipc_alloc(size);
>>         if (unlikely(!out))
>>                 return NULL;
>>         atomic_set(&out->refcount, 1);
>> -       return out + 1;
>> +       return out;
>>  }

See above:

- newary() loses memset()
- ipc_rcu_alloc() does not gain it
- changelog says "To simplify code, the ipc_rcu_alloc initializes the
allocation to 0." This is what's wanted but the patch doesn't do it.

The actual change that (temporarily) adds memset to ipc_rcu_alloc() is
one of the following patches, but it should be here or bisection may
cause unexpected results for semaphores.

It's a minor issue, since these will likely all land at the same time,
but it's probably still worth fixing.

-Kees

-- 
Kees Cook
Pixel Security


end of thread, other threads:[~2017-05-26  3:37 UTC | newest]

Thread overview: 38+ messages
2017-05-08 22:23 [PATCH] ipc/sem: Avoid indexing past end of sem_array Kees Cook
2017-05-14 13:54 ` Manfred Spraul
2017-05-15 17:40   ` Kees Cook
2017-05-15 17:19 ` [PATCH 0/2] Misc cleanups for ipc Manfred Spraul
2017-05-15 17:19   ` [PATCH 1/3] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
2017-05-15 20:08     ` Andrew Morton
2017-05-15 22:16       ` Kees Cook
2017-05-16  5:46     ` Christoph Hellwig
2017-05-15 17:19   ` [PATCH 2/3] ipc: merge ipc_rcu and kern_ipc_perm Manfred Spraul
2017-05-16  0:03     ` Kees Cook
2017-05-15 17:19   ` [PATCH 3/3] include/linux/sem.h: Correctly document sem_ctime Manfred Spraul
2017-05-25 18:50 ` [PATCH 0/20 V3] Misc cleanups for ipc Manfred Spraul
2017-05-25 18:50   ` [PATCH 01/20] ipc/sem.c: remove sem_base, embed struct sem Manfred Spraul
2017-05-25 19:43     ` Kees Cook
2017-05-25 18:50   ` [PATCH 02/20] ipc: merge ipc_rcu and kern_ipc_perm Manfred Spraul
2017-05-25 19:34     ` Kees Cook
2017-05-26  3:37       ` Kees Cook
2017-05-25 18:50   ` [PATCH 03/20] include/linux/sem.h: Correctly document sem_ctime Manfred Spraul
2017-05-25 18:50   ` [PATCH 04/20] ipc: Drop non-RCU allocation Manfred Spraul
2017-05-25 19:35     ` Kees Cook
2017-05-25 18:50   ` [PATCH 05/20] ipc/sem: Do not use ipc_rcu_free() Manfred Spraul
2017-05-25 18:50   ` [PATCH 06/20] ipc/shm: " Manfred Spraul
2017-05-25 18:50   ` [PATCH 07/20] ipc/msg: " Manfred Spraul
2017-05-25 18:50   ` [PATCH 08/20] ipc/util: Drop ipc_rcu_free() Manfred Spraul
2017-05-25 18:50   ` [PATCH 09/20] ipc/sem: Avoid ipc_rcu_alloc() Manfred Spraul
2017-05-25 18:50   ` [PATCH 10/20] ipc/shm: " Manfred Spraul
2017-05-25 18:50   ` [PATCH 11/20] ipc/msg: " Manfred Spraul
2017-05-25 18:50   ` [PATCH 12/20] ipc/util: Drop ipc_rcu_alloc() Manfred Spraul
2017-05-25 18:51   ` [PATCH 13/20] ipc/sem.c: Avoid ipc_rcu_putref for failed ipc_addid() Manfred Spraul
2017-05-25 18:51   ` [PATCH 14/20] ipc/shm.c: " Manfred Spraul
2017-05-25 18:51   ` [PATCH 15/20] ipc/msg.c: " Manfred Spraul
2017-05-25 18:51   ` [PATCH 16/20] ipc: Move atomic_set() to where it is needed Manfred Spraul
2017-05-25 18:51   ` [PATCH 17/20] ipc/shm: Remove special shm_alloc/free Manfred Spraul
2017-05-25 18:51   ` [PATCH 18/20] ipc/msg: Remove special msg_alloc/free Manfred Spraul
2017-05-25 18:51   ` [PATCH 19/20] ipc/sem: Drop __sem_free() Manfred Spraul
2017-05-25 18:51   ` [PATCH 20/20] ipc/util.h: Update documentation for ipc_getref() and ipc_putref() Manfred Spraul
2017-05-25 19:45   ` [PATCH 0/20 V3] Misc cleanups for ipc Kees Cook
2017-05-26  1:56     ` Manfred Spraul
