* [PATCH 00/37] fine-grained locking in binder driver
@ 2017-06-29 19:01 Todd Kjos
  2017-06-29 19:01 ` [PATCH 01/37] Revert "android: binder: Sanity check at binder ioctl" Todd Kjos
                   ` (37 more replies)
  0 siblings, 38 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

The binder driver uses a global mutex to serialize access to state in a
multi-threaded environment. This global lock has been increasingly
problematic as Android devices have scaled to more cores. The problem is
not so much contention for the global lock, which remains relatively
low, but the priority inversion that occurs regularly when a lower
priority thread is preempted while holding the lock and a higher
priority thread blocks on it. These cases can be especially painful if the
lower priority thread runs in the background on a slow core at a low
frequency. This often manifests as missed frames or other glitches.

For several years, many Android devices have used a hacky solution that
disables preemption for most of the time the global mutex is held. This
dramatically decreased glitches induced by priority inversion and
increased the average throughput for binder transactions.

Moving to fine-grained locking in this patchset results in a cleaner
and more scalable solution than the preempt-disable hack. Priority
inversion is decreased significantly.

Here is a comparison of the binder throughputs for the 3 cases
with no payload (using binderThroughputTest on a 4-core Pixel device):

1 Client/Server Pair (iterations/s):
Global Mutex:	 4267
+ No-Preempt:	69688
Fine-Grained:	52313

2 Client/Server Pairs (iterations/s):
Global Mutex:	  5608
+ No-Preempt:	111346
Fine-Grained:	117039

4 Client/Server Pairs (iterations/s):
Global Mutex:	 12839
+ No-Preempt:	118049
Fine-Grained:	189805

8 Client/Server Pairs (iterations/s):
Global Mutex:	 12991
+ No-Preempt:	111780
Fine-Grained:	203607

16 Client/Server Pairs (iterations/s):
Global Mutex:	 14467
+ No-Preempt:	106763
Fine-Grained:	202942

Note that the global lock without preempt-disable performs
significantly worse on Pixel than on some other devices. This run used
the 4.4 version of the binder driver that is currently upstream (and
few lines have changed since then, which wouldn't explain the poor
performance).

The no-preempt version has better throughput in the single-threaded case,
where the new locking overhead adds to the transaction latency. However,
with multiple concurrent transactions, the lack of contention results in
better throughput for the fine-grained case.

In the patchset, the binder allocator is moved to a separate file and
protected with its own per-process mutex.
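
To illustrate the allocator locking pattern (a sketch mirroring what
patch 6 of this series introduces, not the exact driver code): each
public binder_alloc entry point takes alloc->mutex and delegates to a
*_locked variant that assumes the mutex is held.

	struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
						   size_t data_size,
						   size_t offsets_size,
						   size_t extra_buffers_size,
						   int is_async)
	{
		struct binder_buffer *buffer;

		/* all allocator state is serialized on the per-proc mutex */
		mutex_lock(&alloc->mutex);
		buffer = binder_alloc_new_buf_locked(alloc, data_size,
						     offsets_size,
						     extra_buffers_size,
						     is_async);
		mutex_unlock(&alloc->mutex);
		return buffer;
	}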

Most of the binder driver is now protected by 3 spinlocks, which must be
acquired in the order shown:
1) proc->outer_lock : protects binder_ref. binder_proc_lock() and
   binder_proc_unlock() are used to acq/rel.
2) node->lock : protects most fields of binder_node. binder_node_lock()
   and binder_node_unlock() are used to acq/rel.
3) proc->inner_lock : protects the thread and node lists (proc->threads,
   proc->waiting_threads, proc->nodes) and all todo lists associated with
   the binder_proc (proc->todo, thread->todo, proc->delivered_death and
   node->async_todo), as well as thread->transaction_stack.
   binder_inner_proc_lock() and binder_inner_proc_unlock() are used
   to acq/rel.

Any lock under procA must never be nested under any lock at the same
level or below on procB.
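
As a minimal sketch of the required nesting (example_locked_access is
a hypothetical call site; the helper names are the ones listed above):

	static void example_locked_access(struct binder_proc *proc,
					  struct binder_node *node)
	{
		binder_proc_lock(proc);		/* 1) proc->outer_lock */
		binder_node_lock(node);		/* 2) node->lock */
		binder_inner_proc_lock(proc);	/* 3) proc->inner_lock */

		/*
		 * ... safely access todo lists, thread lists and
		 * thread->transaction_stack here ...
		 */

		binder_inner_proc_unlock(proc);
		binder_node_unlock(node);
		binder_proc_unlock(proc);
	}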

Significant refactoring was needed to implement the locking, so there
are 37 patches in the set.

Here are the patches grouped into 4 categories:

1) bugfixes: 3 patches that fix behavior and are needed for the
   fine-grained locking implementation
	Revert "binder: Sanity check at binder ioctl"
	  - note: the patch being reverted introduced a kernel race
	          in order to fix a userspace bug. An attempt to fix
	          the race was submitted in
	          "[PATCH v2] android: binder: fix dangling pointer comparison";
	          however, that discussion concluded that the patch
	          should be reverted and the problem fixed in
	          userspace. Doing the revert now since this patch
	          conflicts with some of the fine-grained locking
	          patches.
	binder: use group leader instead of open thread
	binder: Use wake up hint for synchronous transactions.

2) Separate binder allocator into a separate file from binder driver
	binder: separate binder allocator structure from binder proc
	binder: remove unneeded cleanup code
	binder: separate out binder_alloc functions
	binder: move binder_alloc to separate file

3) Refactor binder driver to support locking
	binder: remove binder_debug_no_lock mechanism
	binder: add protection for non-perf cases
	binder: change binder_stats to atomics
	binder: make binder_last_id an atomic
	binder: add log information for binder transaction failures
	binder: refactor queue management in binder_thread_read
	binder: avoid race conditions when enqueuing txn 
	binder: don't modify thread->looper from other threads
	binder: remove dead code in binder_get_ref_for_node
	binder: protect against two threads freeing buffer
	binder: add more debug info when allocation fails.
	binder: use atomic for transaction_log index
	binder: refactor binder_pop_transaction
	binder: guarantee txn complete / errors delivered in-order
	binder: make sure target_node has strong ref 
	binder: make sure accesses to proc/thread are safe
	binder: refactor binder ref inc/dec for thread safety
	binder: use node->tmp_refs to ensure node safety

4) Add the locks and remove the global lock
	binder: introduce locking helper functions
	binder: use inner lock to sync work dq and node counts
	binder: add spinlocks to protect todo lists
	binder: add spinlock to protect binder_node
	binder: protect proc->nodes with inner lock
	binder: protect proc->threads with inner_lock
	binder: protect transaction_stack with inner lock.
	binder: use inner lock to protect thread accounting
	binder: protect binder_ref with outer lock
	binder: protect against stale pointers in print_binder_transaction
	binder: fix death race conditions
	binder: remove global binder lock

drivers/android/Makefile       |    2 +-
drivers/android/binder.c       | 3467 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------
drivers/android/binder_alloc.c |  802 ++++++++++++++++++++++++++++++++
drivers/android/binder_alloc.h |  163 +++++++
drivers/android/binder_trace.h |   41 +-
5 files changed, 3235 insertions(+), 1240 deletions(-)


* [PATCH 01/37] Revert "android: binder: Sanity check at binder ioctl"
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-07-03  9:17   ` Greg KH
  2017-06-29 19:01 ` [PATCH 02/37] binder: use group leader instead of open thread Todd Kjos
                   ` (36 subsequent siblings)
  37 siblings, 1 reply; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

This reverts commit a906d6931f3ccaf7de805643190765ddd7378e27.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index aae4d8d4be36..157bd3e49ff4 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -3247,10 +3247,6 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	/*pr_info("binder_ioctl: %d:%d %x %lx\n",
 			proc->pid, current->pid, cmd, arg);*/
 
-	if (unlikely(current->mm != proc->vma_vm_mm)) {
-		pr_err("current mm mismatch proc mm\n");
-		return -EINVAL;
-	}
 	trace_binder_ioctl(cmd, arg);
 
 	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
@@ -3466,7 +3462,6 @@ static int binder_open(struct inode *nodp, struct file *filp)
 		return -ENOMEM;
 	get_task_struct(current);
 	proc->tsk = current;
-	proc->vma_vm_mm = current->mm;
 	INIT_LIST_HEAD(&proc->todo);
 	init_waitqueue_head(&proc->wait);
 	proc->default_priority = task_nice(current);
-- 
2.13.2.725.g09c95d1e9-goog


* [PATCH 02/37] binder: use group leader instead of open thread
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
  2017-06-29 19:01 ` [PATCH 01/37] Revert "android: binder: Sanity check at binder ioctl" Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-07-03  9:17   ` Greg KH
  2017-07-24 21:00   ` John Stultz
  2017-06-29 19:01 ` [PATCH 03/37] binder: Use wake up hint for synchronous transactions Todd Kjos
                   ` (35 subsequent siblings)
  37 siblings, 2 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

The binder allocator assumes that the thread that
called binder_open will never die for the lifetime of
that proc. That thread is normally the group_leader;
however, it may not be. Use the group_leader instead
of current.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 157bd3e49ff4..9393924ae8e8 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -3460,8 +3460,8 @@ static int binder_open(struct inode *nodp, struct file *filp)
 	proc = kzalloc(sizeof(*proc), GFP_KERNEL);
 	if (proc == NULL)
 		return -ENOMEM;
-	get_task_struct(current);
-	proc->tsk = current;
+	get_task_struct(current->group_leader);
+	proc->tsk = current->group_leader;
 	INIT_LIST_HEAD(&proc->todo);
 	init_waitqueue_head(&proc->wait);
 	proc->default_priority = task_nice(current);
-- 
2.13.2.725.g09c95d1e9-goog


* [PATCH 03/37] binder: Use wake up hint for synchronous transactions.
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
  2017-06-29 19:01 ` [PATCH 01/37] Revert "android: binder: Sanity check at binder ioctl" Todd Kjos
  2017-06-29 19:01 ` [PATCH 02/37] binder: use group leader instead of open thread Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-07-03  9:18   ` Greg KH
  2017-06-29 19:01 ` [PATCH 04/37] binder: separate binder allocator structure from binder proc Todd Kjos
                   ` (34 subsequent siblings)
  37 siblings, 1 reply; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC
  To: gregkh, arve, devel, linux-kernel, maco, tkjos
  Cc: Riley Andrews, Omprakash Dhyade

From: Riley Andrews <riandrews@google.com>

Use wake_up_interruptible_sync() to hint to the scheduler that binder
transactions are synchronous wakeups. Disable preemption while waking
to avoid ping-ponging on the binder lock.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Omprakash Dhyade <odhyade@codeaurora.org>
---
 drivers/android/binder.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 9393924ae8e8..f7665c31feca 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -2200,8 +2200,12 @@ static void binder_transaction(struct binder_proc *proc,
 	list_add_tail(&t->work.entry, target_list);
 	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
 	list_add_tail(&tcomplete->entry, &thread->todo);
-	if (target_wait)
-		wake_up_interruptible(target_wait);
+	if (target_wait) {
+		if (reply || !(t->flags & TF_ONE_WAY))
+			wake_up_interruptible_sync(target_wait);
+		else
+			wake_up_interruptible(target_wait);
+	}
 	return;
 
 err_translate_failed:
-- 
2.13.2.725.g09c95d1e9-goog


* [PATCH 04/37] binder: separate binder allocator structure from binder proc
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (2 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 03/37] binder: Use wake up hint for synchronous transactions Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 05/37] binder: remove unneeded cleanup code Todd Kjos
                   ` (33 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

The binder allocator is logically separate from the rest
of the binder driver. Separate the data structures to
prepare for splitting the allocator into a separate file
with separate locking.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c       | 212 +++++++++++++++++++++++++----------------
 drivers/android/binder_trace.h |   2 +-
 2 files changed, 129 insertions(+), 85 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index f7665c31feca..1097d056ea6b 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -319,6 +319,41 @@ enum binder_deferred_state {
 	BINDER_DEFERRED_RELEASE      = 0x04,
 };
 
+/**
+ * struct binder_alloc - per-binder proc state for binder allocator
+ * @vma:               vm_area_struct passed to mmap_handler
+ *                     (invarient after mmap)
+ * @vma_vm_mm:         copy of vma->vm_mm (invarient after mmap)
+ * @buffer:            base of per-proc address space mapped via mmap
+ * @user_buffer_offset: offset between user and kernel VAs for buffer
+ * @buffers:           list of all buffers for this proc
+ * @free_buffers:      rb tree of buffers available for allocation
+ *                     sorted by size
+ * @allocated_buffers: rb tree of allocated buffers sorted by address
+ * @free_async_space:  VA space available for async buffers. This is
+ *                     initialized at mmap time to 1/2 the full VA space
+ * @pages:             array of physical page addresses for each page of
+ *                     mmap'd space
+ * @buffer_size:       size of address space (could be less than requested)
+ *
+ * Bookkeeping structure for per-proc address space management for binder
+ * buffers. It is normally initialized during binder_init() and binder_mmap()
+ * calls. The address space is used for both user-visible buffers and for
+ * struct binder_buffer objects used to track the user buffers
+ */
+struct binder_alloc {
+	struct vm_area_struct *vma;
+	struct mm_struct *vma_vm_mm;
+	void *buffer;
+	ptrdiff_t user_buffer_offset;
+	struct list_head buffers;
+	struct rb_root free_buffers;
+	struct rb_root allocated_buffers;
+	size_t free_async_space;
+	struct page **pages;
+	size_t buffer_size;
+};
+
 struct binder_proc {
 	struct hlist_node proc_node;
 	struct rb_root threads;
@@ -326,23 +361,11 @@ struct binder_proc {
 	struct rb_root refs_by_desc;
 	struct rb_root refs_by_node;
 	int pid;
-	struct vm_area_struct *vma;
-	struct mm_struct *vma_vm_mm;
 	struct task_struct *tsk;
 	struct files_struct *files;
 	struct hlist_node deferred_work_node;
 	int deferred_work;
-	void *buffer;
-	ptrdiff_t user_buffer_offset;
 
-	struct list_head buffers;
-	struct rb_root free_buffers;
-	struct rb_root allocated_buffers;
-	size_t free_async_space;
-
-	struct page **pages;
-	size_t buffer_size;
-	uint32_t buffer_free;
 	struct list_head todo;
 	wait_queue_head_t wait;
 	struct binder_stats stats;
@@ -353,6 +376,7 @@ struct binder_proc {
 	int ready_threads;
 	long default_priority;
 	struct dentry *debugfs_entry;
+	struct binder_alloc alloc;
 	struct binder_context *context;
 };
 
@@ -485,8 +509,10 @@ static void binder_set_nice(long nice)
 static size_t binder_buffer_size(struct binder_proc *proc,
 				 struct binder_buffer *buffer)
 {
-	if (list_is_last(&buffer->entry, &proc->buffers))
-		return proc->buffer + proc->buffer_size - (void *)buffer->data;
+	if (list_is_last(&buffer->entry, &proc->alloc.buffers))
+		return proc->alloc.buffer +
+				proc->alloc.buffer_size -
+				(void *)buffer->data;
 	return (size_t)list_entry(buffer->entry.next,
 			  struct binder_buffer, entry) - (size_t)buffer->data;
 }
@@ -494,7 +520,7 @@ static size_t binder_buffer_size(struct binder_proc *proc,
 static void binder_insert_free_buffer(struct binder_proc *proc,
 				      struct binder_buffer *new_buffer)
 {
-	struct rb_node **p = &proc->free_buffers.rb_node;
+	struct rb_node **p = &proc->alloc.free_buffers.rb_node;
 	struct rb_node *parent = NULL;
 	struct binder_buffer *buffer;
 	size_t buffer_size;
@@ -521,13 +547,13 @@ static void binder_insert_free_buffer(struct binder_proc *proc,
 			p = &parent->rb_right;
 	}
 	rb_link_node(&new_buffer->rb_node, parent, p);
-	rb_insert_color(&new_buffer->rb_node, &proc->free_buffers);
+	rb_insert_color(&new_buffer->rb_node, &proc->alloc.free_buffers);
 }
 
 static void binder_insert_allocated_buffer(struct binder_proc *proc,
 					   struct binder_buffer *new_buffer)
 {
-	struct rb_node **p = &proc->allocated_buffers.rb_node;
+	struct rb_node **p = &proc->alloc.allocated_buffers.rb_node;
 	struct rb_node *parent = NULL;
 	struct binder_buffer *buffer;
 
@@ -546,18 +572,19 @@ static void binder_insert_allocated_buffer(struct binder_proc *proc,
 			BUG();
 	}
 	rb_link_node(&new_buffer->rb_node, parent, p);
-	rb_insert_color(&new_buffer->rb_node, &proc->allocated_buffers);
+	rb_insert_color(&new_buffer->rb_node, &proc->alloc.allocated_buffers);
 }
 
 static struct binder_buffer *binder_buffer_lookup(struct binder_proc *proc,
 						  uintptr_t user_ptr)
 {
-	struct rb_node *n = proc->allocated_buffers.rb_node;
+	struct rb_node *n = proc->alloc.allocated_buffers.rb_node;
 	struct binder_buffer *buffer;
 	struct binder_buffer *kern_ptr;
 
-	kern_ptr = (struct binder_buffer *)(user_ptr - proc->user_buffer_offset
-		- offsetof(struct binder_buffer, data));
+	kern_ptr = (struct binder_buffer *)
+		(user_ptr - proc->alloc.user_buffer_offset -
+			offsetof(struct binder_buffer, data));
 
 	while (n) {
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
@@ -598,8 +625,8 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
 
 	if (mm) {
 		down_write(&mm->mmap_sem);
-		vma = proc->vma;
-		if (vma && mm != proc->vma_vm_mm) {
+		vma = proc->alloc.vma;
+		if (vma && mm != proc->alloc.vma_vm_mm) {
 			pr_err("%d: vma mm and task mm mismatch\n",
 				proc->pid);
 			vma = NULL;
@@ -618,7 +645,8 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
 	for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
 		int ret;
 
-		page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];
+		page = &proc->alloc.pages[
+			(page_addr - proc->alloc.buffer) / PAGE_SIZE];
 
 		BUG_ON(*page);
 		*page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
@@ -637,7 +665,7 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
 			goto err_map_kernel_failed;
 		}
 		user_page_addr =
-			(uintptr_t)page_addr + proc->user_buffer_offset;
+			(uintptr_t)page_addr + proc->alloc.user_buffer_offset;
 		ret = vm_insert_page(vma, user_page_addr, page[0]);
 		if (ret) {
 			pr_err("%d: binder_alloc_buf failed to map page at %lx in userspace\n",
@@ -655,10 +683,11 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
 free_range:
 	for (page_addr = end - PAGE_SIZE; page_addr >= start;
 	     page_addr -= PAGE_SIZE) {
-		page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];
+		page = &proc->alloc.pages[
+			(page_addr - proc->alloc.buffer) / PAGE_SIZE];
 		if (vma)
 			zap_page_range(vma, (uintptr_t)page_addr +
-				proc->user_buffer_offset, PAGE_SIZE);
+				proc->alloc.user_buffer_offset, PAGE_SIZE);
 err_vm_insert_page_failed:
 		unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
 err_map_kernel_failed:
@@ -681,7 +710,7 @@ static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
 					      size_t extra_buffers_size,
 					      int is_async)
 {
-	struct rb_node *n = proc->free_buffers.rb_node;
+	struct rb_node *n = proc->alloc.free_buffers.rb_node;
 	struct binder_buffer *buffer;
 	size_t buffer_size;
 	struct rb_node *best_fit = NULL;
@@ -689,7 +718,7 @@ static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
 	void *end_page_addr;
 	size_t size, data_offsets_size;
 
-	if (proc->vma == NULL) {
+	if (proc->alloc.vma == NULL) {
 		pr_err("%d: binder_alloc_buf, no vma\n",
 		       proc->pid);
 		return NULL;
@@ -710,7 +739,8 @@ static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
 		return NULL;
 	}
 	if (is_async &&
-	    proc->free_async_space < size + sizeof(struct binder_buffer)) {
+	    proc->alloc.free_async_space <
+			size + sizeof(struct binder_buffer)) {
 		binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
 			     "%d: binder_alloc_buf size %zd failed, no async space left\n",
 			      proc->pid, size);
@@ -762,7 +792,7 @@ static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
 	    (void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr, NULL))
 		return NULL;
 
-	rb_erase(best_fit, &proc->free_buffers);
+	rb_erase(best_fit, &proc->alloc.free_buffers);
 	buffer->free = 0;
 	binder_insert_allocated_buffer(proc, buffer);
 	if (buffer_size != size) {
@@ -780,10 +810,11 @@ static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
 	buffer->extra_buffers_size = extra_buffers_size;
 	buffer->async_transaction = is_async;
 	if (is_async) {
-		proc->free_async_space -= size + sizeof(struct binder_buffer);
+		proc->alloc.free_async_space -=
+			size + sizeof(struct binder_buffer);
 		binder_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
 			     "%d: binder_alloc_buf size %zd async free %zd\n",
-			      proc->pid, size, proc->free_async_space);
+			      proc->pid, size, proc->alloc.free_async_space);
 	}
 
 	return buffer;
@@ -806,7 +837,7 @@ static void binder_delete_free_buffer(struct binder_proc *proc,
 	int free_page_end = 1;
 	int free_page_start = 1;
 
-	BUG_ON(proc->buffers.next == &buffer->entry);
+	BUG_ON(proc->alloc.buffers.next == &buffer->entry);
 	prev = list_entry(buffer->entry.prev, struct binder_buffer, entry);
 	BUG_ON(!prev->free);
 	if (buffer_end_page(prev) == buffer_start_page(buffer)) {
@@ -818,7 +849,7 @@ static void binder_delete_free_buffer(struct binder_proc *proc,
 			      proc->pid, buffer, prev);
 	}
 
-	if (!list_is_last(&buffer->entry, &proc->buffers)) {
+	if (!list_is_last(&buffer->entry, &proc->alloc.buffers)) {
 		next = list_entry(buffer->entry.next,
 				  struct binder_buffer, entry);
 		if (buffer_start_page(next) == buffer_end_page(buffer)) {
@@ -862,39 +893,40 @@ static void binder_free_buf(struct binder_proc *proc,
 	BUG_ON(buffer->free);
 	BUG_ON(size > buffer_size);
 	BUG_ON(buffer->transaction != NULL);
-	BUG_ON((void *)buffer < proc->buffer);
-	BUG_ON((void *)buffer > proc->buffer + proc->buffer_size);
+	BUG_ON((void *)buffer < proc->alloc.buffer);
+	BUG_ON((void *)buffer > proc->alloc.buffer + proc->alloc.buffer_size);
 
 	if (buffer->async_transaction) {
-		proc->free_async_space += size + sizeof(struct binder_buffer);
+		proc->alloc.free_async_space +=
+			size + sizeof(struct binder_buffer);
 
 		binder_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
 			     "%d: binder_free_buf size %zd async free %zd\n",
-			      proc->pid, size, proc->free_async_space);
+			      proc->pid, size, proc->alloc.free_async_space);
 	}
 
 	binder_update_page_range(proc, 0,
 		(void *)PAGE_ALIGN((uintptr_t)buffer->data),
 		(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK),
 		NULL);
-	rb_erase(&buffer->rb_node, &proc->allocated_buffers);
+	rb_erase(&buffer->rb_node, &proc->alloc.allocated_buffers);
 	buffer->free = 1;
-	if (!list_is_last(&buffer->entry, &proc->buffers)) {
+	if (!list_is_last(&buffer->entry, &proc->alloc.buffers)) {
 		struct binder_buffer *next = list_entry(buffer->entry.next,
 						struct binder_buffer, entry);
 
 		if (next->free) {
-			rb_erase(&next->rb_node, &proc->free_buffers);
+			rb_erase(&next->rb_node, &proc->alloc.free_buffers);
 			binder_delete_free_buffer(proc, next);
 		}
 	}
-	if (proc->buffers.next != &buffer->entry) {
+	if (proc->alloc.buffers.next != &buffer->entry) {
 		struct binder_buffer *prev = list_entry(buffer->entry.prev,
 						struct binder_buffer, entry);
 
 		if (prev->free) {
 			binder_delete_free_buffer(proc, buffer);
-			rb_erase(&prev->rb_node, &proc->free_buffers);
+			rb_erase(&prev->rb_node, &proc->alloc.free_buffers);
 			buffer = prev;
 		}
 	}
@@ -1532,7 +1564,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
 			 * back to kernel address space to access it
 			 */
 			parent_buffer = parent->buffer -
-				proc->user_buffer_offset;
+				proc->alloc.user_buffer_offset;
 
 			fd_buf_size = sizeof(u32) * fda->num_fds;
 			if (fda->num_fds >= SIZE_MAX / sizeof(u32)) {
@@ -1750,7 +1782,7 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
 	 * Since the parent was already fixed up, convert it
 	 * back to the kernel address space to access it
 	 */
-	parent_buffer = parent->buffer - target_proc->user_buffer_offset;
+	parent_buffer = parent->buffer - target_proc->alloc.user_buffer_offset;
 	fd_array = (u32 *)(parent_buffer + fda->parent_offset);
 	if (!IS_ALIGNED((unsigned long)fd_array, sizeof(u32))) {
 		binder_user_error("%d:%d parent offset not aligned correctly.\n",
@@ -1818,7 +1850,7 @@ static int binder_fixup_parent(struct binder_transaction *t,
 		return -EINVAL;
 	}
 	parent_buffer = (u8 *)(parent->buffer -
-			       target_proc->user_buffer_offset);
+			       target_proc->alloc.user_buffer_offset);
 	*(binder_uintptr_t *)(parent_buffer + bp->parent_offset) = bp->buffer;
 
 	return 0;
@@ -2158,7 +2190,7 @@ static void binder_transaction(struct binder_proc *proc,
 			}
 			/* Fixup buffer pointer to target proc address space */
 			bp->buffer = (uintptr_t)sg_bufp +
-				target_proc->user_buffer_offset;
+				target_proc->alloc.user_buffer_offset;
 			sg_bufp += ALIGN(bp->length, sizeof(u64));
 
 			ret = binder_fixup_parent(t, thread, bp, off_start,
@@ -2920,7 +2952,7 @@ static int binder_thread_read(struct binder_proc *proc,
 		tr.offsets_size = t->buffer->offsets_size;
 		tr.data.ptr.buffer = (binder_uintptr_t)(
 					(uintptr_t)t->buffer->data +
-					proc->user_buffer_offset);
+					proc->alloc.user_buffer_offset);
 		tr.data.ptr.offsets = tr.data.ptr.buffer +
 					ALIGN(t->buffer->data_size,
 					    sizeof(void *));
@@ -3338,8 +3370,8 @@ static void binder_vma_close(struct vm_area_struct *vma)
 		     proc->pid, vma->vm_start, vma->vm_end,
 		     (vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags,
 		     (unsigned long)pgprot_val(vma->vm_page_prot));
-	proc->vma = NULL;
-	proc->vma_vm_mm = NULL;
+	proc->alloc.vma = NULL;
+	proc->alloc.vma_vm_mm = NULL;
 	binder_defer_work(proc, BINDER_DEFERRED_PUT_FILES);
 }
 
@@ -3382,7 +3414,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
 	vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
 
 	mutex_lock(&binder_mmap_lock);
-	if (proc->buffer) {
+	if (proc->alloc.buffer) {
 		ret = -EBUSY;
 		failure_string = "already mapped";
 		goto err_already_mapped;
@@ -3394,56 +3426,66 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
 		failure_string = "get_vm_area";
 		goto err_get_vm_area_failed;
 	}
-	proc->buffer = area->addr;
-	proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;
+	proc->alloc.buffer = area->addr;
+	proc->alloc.user_buffer_offset =
+		vma->vm_start - (uintptr_t)proc->alloc.buffer;
 	mutex_unlock(&binder_mmap_lock);
 
 #ifdef CONFIG_CPU_CACHE_VIPT
 	if (cache_is_vipt_aliasing()) {
-		while (CACHE_COLOUR((vma->vm_start ^ (uint32_t)proc->buffer))) {
-			pr_info("binder_mmap: %d %lx-%lx maps %p bad alignment\n", proc->pid, vma->vm_start, vma->vm_end, proc->buffer);
+		while (CACHE_COLOUR(
+			(vma->vm_start ^ (uint32_t)proc->alloc.buffer))) {
+			pr_info("%s: %d %lx-%lx maps %pK bad alignment\n",
+				__func__,
+				proc->pid, vma->vm_start,
+				vma->vm_end, proc->alloc.buffer);
 			vma->vm_start += PAGE_SIZE;
 		}
 	}
 #endif
-	proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL);
-	if (proc->pages == NULL) {
+	proc->alloc.pages =
+		kzalloc(sizeof(proc->alloc.pages[0]) *
+				((vma->vm_end - vma->vm_start) / PAGE_SIZE),
+			GFP_KERNEL);
+	if (proc->alloc.pages == NULL) {
 		ret = -ENOMEM;
 		failure_string = "alloc page array";
 		goto err_alloc_pages_failed;
 	}
-	proc->buffer_size = vma->vm_end - vma->vm_start;
+	proc->alloc.buffer_size = vma->vm_end - vma->vm_start;
 
 	vma->vm_ops = &binder_vm_ops;
 	vma->vm_private_data = proc;
 
-	if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {
+	if (binder_update_page_range(proc, 1, proc->alloc.buffer,
+				     proc->alloc.buffer + PAGE_SIZE, vma)) {
 		ret = -ENOMEM;
 		failure_string = "alloc small buf";
 		goto err_alloc_small_buf_failed;
 	}
-	buffer = proc->buffer;
-	INIT_LIST_HEAD(&proc->buffers);
-	list_add(&buffer->entry, &proc->buffers);
+	buffer = proc->alloc.buffer;
+	INIT_LIST_HEAD(&proc->alloc.buffers);
+	list_add(&buffer->entry, &proc->alloc.buffers);
 	buffer->free = 1;
 	binder_insert_free_buffer(proc, buffer);
-	proc->free_async_space = proc->buffer_size / 2;
+	proc->alloc.free_async_space = proc->alloc.buffer_size / 2;
 	barrier();
 	proc->files = get_files_struct(current);
-	proc->vma = vma;
-	proc->vma_vm_mm = vma->vm_mm;
+	proc->alloc.vma = vma;
+	proc->alloc.vma_vm_mm = vma->vm_mm;
 
-	/*pr_info("binder_mmap: %d %lx-%lx maps %p\n",
-		 proc->pid, vma->vm_start, vma->vm_end, proc->buffer);*/
+	/*pr_info("binder_mmap: %d %lx-%lx maps %pK\n",
+	 *	proc->pid, vma->vm_start, vma->vm_end, proc->alloc.buffer);
+	 */
 	return 0;
 
 err_alloc_small_buf_failed:
-	kfree(proc->pages);
-	proc->pages = NULL;
+	kfree(proc->alloc.pages);
+	proc->alloc.pages = NULL;
 err_alloc_pages_failed:
 	mutex_lock(&binder_mmap_lock);
-	vfree(proc->buffer);
-	proc->buffer = NULL;
+	vfree(proc->alloc.buffer);
+	proc->alloc.buffer = NULL;
 err_get_vm_area_failed:
 err_already_mapped:
 	mutex_unlock(&binder_mmap_lock);
@@ -3595,7 +3637,7 @@ static void binder_deferred_release(struct binder_proc *proc)
 	int threads, nodes, incoming_refs, outgoing_refs, buffers,
 		active_transactions, page_count;
 
-	BUG_ON(proc->vma);
+	BUG_ON(proc->alloc.vma);
 	BUG_ON(proc->files);
 
 	hlist_del(&proc->proc_node);
@@ -3642,7 +3684,7 @@ static void binder_deferred_release(struct binder_proc *proc)
 	binder_release_work(&proc->delivered_death);
 
 	buffers = 0;
-	while ((n = rb_first(&proc->allocated_buffers))) {
+	while ((n = rb_first(&proc->alloc.allocated_buffers))) {
 		struct binder_buffer *buffer;
 
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
@@ -3663,25 +3705,25 @@ static void binder_deferred_release(struct binder_proc *proc)
 	binder_stats_deleted(BINDER_STAT_PROC);
 
 	page_count = 0;
-	if (proc->pages) {
+	if (proc->alloc.pages) {
 		int i;
 
-		for (i = 0; i < proc->buffer_size / PAGE_SIZE; i++) {
+		for (i = 0; i < proc->alloc.buffer_size / PAGE_SIZE; i++) {
 			void *page_addr;
 
-			if (!proc->pages[i])
+			if (!proc->alloc.pages[i])
 				continue;
 
-			page_addr = proc->buffer + i * PAGE_SIZE;
+			page_addr = proc->alloc.buffer + i * PAGE_SIZE;
 			binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				     "%s: %d: page %d at %p not freed\n",
 				     __func__, proc->pid, i, page_addr);
 			unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
-			__free_page(proc->pages[i]);
+			__free_page(proc->alloc.pages[i]);
 			page_count++;
 		}
-		kfree(proc->pages);
-		vfree(proc->buffer);
+		kfree(proc->alloc.pages);
+		vfree(proc->alloc.buffer);
 	}
 
 	put_task_struct(proc->tsk);
@@ -3911,7 +3953,8 @@ static void print_binder_proc(struct seq_file *m,
 			print_binder_ref(m, rb_entry(n, struct binder_ref,
 						     rb_node_desc));
 	}
-	for (n = rb_first(&proc->allocated_buffers); n != NULL; n = rb_next(n))
+	for (n = rb_first(&proc->alloc.allocated_buffers);
+			n != NULL; n = rb_next(n))
 		print_binder_buffer(m, "  buffer",
 				    rb_entry(n, struct binder_buffer, rb_node));
 	list_for_each_entry(w, &proc->todo, entry)
@@ -4028,7 +4071,7 @@ static void print_binder_proc_stats(struct seq_file *m,
 			"  ready threads %d\n"
 			"  free async space %zd\n", proc->requested_threads,
 			proc->requested_threads_started, proc->max_threads,
-			proc->ready_threads, proc->free_async_space);
+			proc->ready_threads, proc->alloc.free_async_space);
 	count = 0;
 	for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n))
 		count++;
@@ -4046,7 +4089,8 @@ static void print_binder_proc_stats(struct seq_file *m,
 	seq_printf(m, "  refs: %d s %d w %d\n", count, strong, weak);
 
 	count = 0;
-	for (n = rb_first(&proc->allocated_buffers); n != NULL; n = rb_next(n))
+	for (n = rb_first(&proc->alloc.allocated_buffers);
+			n != NULL; n = rb_next(n))
 		count++;
 	seq_printf(m, "  buffers: %d\n", count);
 
diff --git a/drivers/android/binder_trace.h b/drivers/android/binder_trace.h
index 7f20f3dc8369..c835f09656c1 100644
--- a/drivers/android/binder_trace.h
+++ b/drivers/android/binder_trace.h
@@ -280,7 +280,7 @@ TRACE_EVENT(binder_update_page_range,
 	TP_fast_assign(
 		__entry->proc = proc->pid;
 		__entry->allocate = allocate;
-		__entry->offset = start - proc->buffer;
+		__entry->offset = start - proc->alloc.buffer;
 		__entry->size = end - start;
 	),
 	TP_printk("proc=%d allocate=%d offset=%zu size=%zu",
-- 
2.13.2.725.g09c95d1e9-goog


* [PATCH 05/37] binder: remove unneeded cleanup code
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (3 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 04/37] binder: separate binder allocator structure from binder proc Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 06/37] binder: separate out binder_alloc functions Todd Kjos
                   ` (32 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

The buffer's transaction has already been freed before
binder_deferred_release. No need to do it again.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 1097d056ea6b..2cbd7558fc3f 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -3631,7 +3631,6 @@ static int binder_node_release(struct binder_node *node, int refs)
 
 static void binder_deferred_release(struct binder_proc *proc)
 {
-	struct binder_transaction *t;
 	struct binder_context *context = proc->context;
 	struct rb_node *n;
 	int threads, nodes, incoming_refs, outgoing_refs, buffers,
@@ -3689,14 +3688,8 @@ static void binder_deferred_release(struct binder_proc *proc)
 
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
 
-		t = buffer->transaction;
-		if (t) {
-			t->buffer = NULL;
-			buffer->transaction = NULL;
-			pr_err("release proc %d, transaction %d, not freed\n",
-			       proc->pid, t->debug_id);
-			/*BUG();*/
-		}
+		/* Transaction should already have been freed */
+		BUG_ON(buffer->transaction);
 
 		binder_free_buf(proc, buffer);
 		buffers++;
-- 
2.13.2.725.g09c95d1e9-goog


* [PATCH 06/37] binder: separate out binder_alloc functions
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (4 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 05/37] binder: remove unneeded cleanup code Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 07/37] binder: move binder_alloc to separate file Todd Kjos
                   ` (31 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Continuation of splitting the binder allocator from the binder
driver. Separate binder_alloc functions from normal binder
functions. Protect the allocator with a separate mutex.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c       | 649 +++++++++++++++++++++++++----------------
 drivers/android/binder_trace.h |   9 +-
 2 files changed, 410 insertions(+), 248 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 2cbd7558fc3f..6c1d0b5b352b 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -49,7 +49,7 @@
 
 static DEFINE_MUTEX(binder_main_lock);
 static DEFINE_MUTEX(binder_deferred_lock);
-static DEFINE_MUTEX(binder_mmap_lock);
+static DEFINE_MUTEX(binder_alloc_mmap_lock);
 
 static HLIST_HEAD(binder_devices);
 static HLIST_HEAD(binder_procs);
@@ -104,9 +104,7 @@ enum {
 	BINDER_DEBUG_TRANSACTION_COMPLETE   = 1U << 10,
 	BINDER_DEBUG_FREE_BUFFER            = 1U << 11,
 	BINDER_DEBUG_INTERNAL_REFS          = 1U << 12,
-	BINDER_DEBUG_BUFFER_ALLOC           = 1U << 13,
-	BINDER_DEBUG_PRIORITY_CAP           = 1U << 14,
-	BINDER_DEBUG_BUFFER_ALLOC_ASYNC     = 1U << 15,
+	BINDER_DEBUG_PRIORITY_CAP           = 1U << 13,
 };
 static uint32_t binder_debug_mask = BINDER_DEBUG_USER_ERROR |
 	BINDER_DEBUG_FAILED_TRANSACTION | BINDER_DEBUG_DEAD_TRANSACTION;
@@ -159,6 +157,27 @@ module_param_call(stop_on_user_error, binder_set_stop_on_user_error,
 #define to_binder_fd_array_object(hdr) \
 	container_of(hdr, struct binder_fd_array_object, hdr)
 
+/*
+ * debug declarations for binder_alloc. To be
+ * moved to binder_alloc.c
+ */
+enum {
+	BINDER_ALLOC_DEBUG_OPEN_CLOSE             = 1U << 1,
+	BINDER_ALLOC_DEBUG_BUFFER_ALLOC           = 1U << 2,
+	BINDER_ALLOC_DEBUG_BUFFER_ALLOC_ASYNC     = 1U << 3,
+};
+static uint32_t binder_alloc_debug_mask;
+
+module_param_named(alloc_debug_mask, binder_alloc_debug_mask,
+		   uint, 0644);
+
+#define binder_alloc_debug(mask, x...) \
+	do { \
+		if (binder_alloc_debug_mask & mask) \
+			pr_info(x); \
+	} while (0)
+/* end of binder_alloc debug declarations */
+
 enum binder_stat_types {
 	BINDER_STAT_PROC,
 	BINDER_STAT_THREAD,
@@ -342,6 +361,8 @@ enum binder_deferred_state {
  * struct binder_buffer objects used to track the user buffers
  */
 struct binder_alloc {
+	struct mutex mutex;
+	struct task_struct *tsk;
 	struct vm_area_struct *vma;
 	struct mm_struct *vma_vm_mm;
 	void *buffer;
@@ -352,6 +373,7 @@ struct binder_alloc {
 	size_t free_async_space;
 	struct page **pages;
 	size_t buffer_size;
+	int pid;
 };
 
 struct binder_proc {
@@ -423,6 +445,56 @@ struct binder_transaction {
 	kuid_t	sender_euid;
 };
 
+/*
+ * Forward declarations of binder_alloc functions.
+ * These will be moved to binder_alloc.h when
+ * binder_alloc is moved to its own files.
+ */
+extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
+						  size_t data_size,
+						  size_t offsets_size,
+						  size_t extra_buffers_size,
+						  int is_async);
+extern void binder_alloc_init(struct binder_alloc *alloc);
+extern void binder_alloc_vma_close(struct binder_alloc *alloc);
+extern struct binder_buffer *
+binder_alloc_buffer_lookup(struct binder_alloc *alloc,
+			   uintptr_t user_ptr);
+extern void binder_alloc_free_buf(struct binder_alloc *alloc,
+				  struct binder_buffer *buffer);
+extern int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+				     struct vm_area_struct *vma);
+extern void binder_alloc_deferred_release(struct binder_alloc *alloc);
+extern int binder_alloc_get_allocated_count(struct binder_alloc *alloc);
+extern void binder_alloc_print_allocated(struct seq_file *m,
+					 struct binder_alloc *alloc);
+
+static inline size_t
+binder_alloc_get_free_async_space(struct binder_alloc *alloc)
+{
+	size_t free_async_space;
+
+	mutex_lock(&alloc->mutex);
+	free_async_space = alloc->free_async_space;
+	mutex_unlock(&alloc->mutex);
+	return free_async_space;
+}
+
+static inline ptrdiff_t
+binder_alloc_get_user_buffer_offset(struct binder_alloc *alloc)
+{
+	/*
+	 * user_buffer_offset is constant if vma is set and
+	 * undefined if vma is not set. It is possible to
+	 * get here with !alloc->vma if the target process
+	 * is dying while a transaction is being initiated.
+	 * Returning the old value is ok in this case and
+	 * the transaction will fail.
+	 */
+	return alloc->user_buffer_offset;
+}
+/* end of binder_alloc declarations */
+
 static void
 binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
 
@@ -506,21 +578,20 @@ static void binder_set_nice(long nice)
 	binder_user_error("%d RLIMIT_NICE not set\n", current->pid);
 }
 
-static size_t binder_buffer_size(struct binder_proc *proc,
-				 struct binder_buffer *buffer)
+static size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
+				       struct binder_buffer *buffer)
 {
-	if (list_is_last(&buffer->entry, &proc->alloc.buffers))
-		return proc->alloc.buffer +
-				proc->alloc.buffer_size -
-				(void *)buffer->data;
+	if (list_is_last(&buffer->entry, &alloc->buffers))
+		return alloc->buffer +
+		       alloc->buffer_size - (void *)buffer->data;
 	return (size_t)list_entry(buffer->entry.next,
 			  struct binder_buffer, entry) - (size_t)buffer->data;
 }
 
-static void binder_insert_free_buffer(struct binder_proc *proc,
+static void binder_insert_free_buffer(struct binder_alloc *alloc,
 				      struct binder_buffer *new_buffer)
 {
-	struct rb_node **p = &proc->alloc.free_buffers.rb_node;
+	struct rb_node **p = &alloc->free_buffers.rb_node;
 	struct rb_node *parent = NULL;
 	struct binder_buffer *buffer;
 	size_t buffer_size;
@@ -528,18 +599,18 @@ static void binder_insert_free_buffer(struct binder_proc *proc,
 
 	BUG_ON(!new_buffer->free);
 
-	new_buffer_size = binder_buffer_size(proc, new_buffer);
+	new_buffer_size = binder_alloc_buffer_size(alloc, new_buffer);
 
-	binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
-		     "%d: add free buffer, size %zd, at %p\n",
-		      proc->pid, new_buffer_size, new_buffer);
+	binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+		     "%d: add free buffer, size %zd, at %pK\n",
+		      alloc->pid, new_buffer_size, new_buffer);
 
 	while (*p) {
 		parent = *p;
 		buffer = rb_entry(parent, struct binder_buffer, rb_node);
 		BUG_ON(!buffer->free);
 
-		buffer_size = binder_buffer_size(proc, buffer);
+		buffer_size = binder_alloc_buffer_size(alloc, buffer);
 
 		if (new_buffer_size < buffer_size)
 			p = &parent->rb_left;
@@ -547,13 +618,13 @@ static void binder_insert_free_buffer(struct binder_proc *proc,
 			p = &parent->rb_right;
 	}
 	rb_link_node(&new_buffer->rb_node, parent, p);
-	rb_insert_color(&new_buffer->rb_node, &proc->alloc.free_buffers);
+	rb_insert_color(&new_buffer->rb_node, &alloc->free_buffers);
 }
 
-static void binder_insert_allocated_buffer(struct binder_proc *proc,
+static void binder_insert_allocated_buffer(struct binder_alloc *alloc,
 					   struct binder_buffer *new_buffer)
 {
-	struct rb_node **p = &proc->alloc.allocated_buffers.rb_node;
+	struct rb_node **p = &alloc->allocated_buffers.rb_node;
 	struct rb_node *parent = NULL;
 	struct binder_buffer *buffer;
 
@@ -572,19 +643,19 @@ static void binder_insert_allocated_buffer(struct binder_proc *proc,
 			BUG();
 	}
 	rb_link_node(&new_buffer->rb_node, parent, p);
-	rb_insert_color(&new_buffer->rb_node, &proc->alloc.allocated_buffers);
+	rb_insert_color(&new_buffer->rb_node, &alloc->allocated_buffers);
 }
 
-static struct binder_buffer *binder_buffer_lookup(struct binder_proc *proc,
-						  uintptr_t user_ptr)
+static struct binder_buffer *binder_alloc_buffer_lookup_locked(
+		struct binder_alloc *alloc,
+		uintptr_t user_ptr)
 {
-	struct rb_node *n = proc->alloc.allocated_buffers.rb_node;
+	struct rb_node *n = alloc->allocated_buffers.rb_node;
 	struct binder_buffer *buffer;
 	struct binder_buffer *kern_ptr;
 
-	kern_ptr = (struct binder_buffer *)
-		(user_ptr - proc->alloc.user_buffer_offset -
-			offsetof(struct binder_buffer, data));
+	kern_ptr = (struct binder_buffer *)(user_ptr - alloc->user_buffer_offset
+		- offsetof(struct binder_buffer, data));
 
 	while (n) {
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
@@ -600,7 +671,18 @@ static struct binder_buffer *binder_buffer_lookup(struct binder_proc *proc,
 	return NULL;
 }
 
-static int binder_update_page_range(struct binder_proc *proc, int allocate,
+struct binder_buffer *binder_alloc_buffer_lookup(struct binder_alloc *alloc,
+						 uintptr_t user_ptr)
+{
+	struct binder_buffer *buffer;
+
+	mutex_lock(&alloc->mutex);
+	buffer = binder_alloc_buffer_lookup_locked(alloc, user_ptr);
+	mutex_unlock(&alloc->mutex);
+	return buffer;
+}
+
+static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 				    void *start, void *end,
 				    struct vm_area_struct *vma)
 {
@@ -609,26 +691,26 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
 	struct page **page;
 	struct mm_struct *mm;
 
-	binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
-		     "%d: %s pages %p-%p\n", proc->pid,
+	binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+		     "%d: %s pages %pK-%pK\n", alloc->pid,
 		     allocate ? "allocate" : "free", start, end);
 
 	if (end <= start)
 		return 0;
 
-	trace_binder_update_page_range(proc, allocate, start, end);
+	trace_binder_update_page_range(alloc, allocate, start, end);
 
 	if (vma)
 		mm = NULL;
 	else
-		mm = get_task_mm(proc->tsk);
+		mm = get_task_mm(alloc->tsk);
 
 	if (mm) {
 		down_write(&mm->mmap_sem);
-		vma = proc->alloc.vma;
-		if (vma && mm != proc->alloc.vma_vm_mm) {
+		vma = alloc->vma;
+		if (vma && mm != alloc->vma_vm_mm) {
 			pr_err("%d: vma mm and task mm mismatch\n",
-				proc->pid);
+				alloc->pid);
 			vma = NULL;
 		}
 	}
@@ -638,21 +720,20 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
 
 	if (vma == NULL) {
 		pr_err("%d: binder_alloc_buf failed to map pages in userspace, no vma\n",
-			proc->pid);
+			alloc->pid);
 		goto err_no_vma;
 	}
 
 	for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
 		int ret;
 
-		page = &proc->alloc.pages[
-			(page_addr - proc->alloc.buffer) / PAGE_SIZE];
+		page = &alloc->pages[(page_addr - alloc->buffer) / PAGE_SIZE];
 
 		BUG_ON(*page);
 		*page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
 		if (*page == NULL) {
-			pr_err("%d: binder_alloc_buf failed for page at %p\n",
-				proc->pid, page_addr);
+			pr_err("%d: binder_alloc_buf failed for page at %pK\n",
+				alloc->pid, page_addr);
 			goto err_alloc_page_failed;
 		}
 		ret = map_kernel_range_noflush((unsigned long)page_addr,
@@ -660,16 +741,16 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
 		flush_cache_vmap((unsigned long)page_addr,
 				(unsigned long)page_addr + PAGE_SIZE);
 		if (ret != 1) {
-			pr_err("%d: binder_alloc_buf failed to map page at %p in kernel\n",
-			       proc->pid, page_addr);
+			pr_err("%d: binder_alloc_buf failed to map page at %pK in kernel\n",
+			       alloc->pid, page_addr);
 			goto err_map_kernel_failed;
 		}
 		user_page_addr =
-			(uintptr_t)page_addr + proc->alloc.user_buffer_offset;
+			(uintptr_t)page_addr + alloc->user_buffer_offset;
 		ret = vm_insert_page(vma, user_page_addr, page[0]);
 		if (ret) {
 			pr_err("%d: binder_alloc_buf failed to map page at %lx in userspace\n",
-			       proc->pid, user_page_addr);
+			       alloc->pid, user_page_addr);
 			goto err_vm_insert_page_failed;
 		}
 		/* vm_insert_page does not seem to increment the refcount */
@@ -683,11 +764,10 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
 free_range:
 	for (page_addr = end - PAGE_SIZE; page_addr >= start;
 	     page_addr -= PAGE_SIZE) {
-		page = &proc->alloc.pages[
-			(page_addr - proc->alloc.buffer) / PAGE_SIZE];
+		page = &alloc->pages[(page_addr - alloc->buffer) / PAGE_SIZE];
 		if (vma)
 			zap_page_range(vma, (uintptr_t)page_addr +
-				proc->alloc.user_buffer_offset, PAGE_SIZE);
+				alloc->user_buffer_offset, PAGE_SIZE);
 err_vm_insert_page_failed:
 		unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
 err_map_kernel_failed:
@@ -704,13 +784,11 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
 	return -ENOMEM;
 }
 
-static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
-					      size_t data_size,
-					      size_t offsets_size,
-					      size_t extra_buffers_size,
-					      int is_async)
+static struct binder_buffer *binder_alloc_new_buf_locked(
+		struct binder_alloc *alloc, size_t data_size,
+		size_t offsets_size, size_t extra_buffers_size, int is_async)
 {
-	struct rb_node *n = proc->alloc.free_buffers.rb_node;
+	struct rb_node *n = alloc->free_buffers.rb_node;
 	struct binder_buffer *buffer;
 	size_t buffer_size;
 	struct rb_node *best_fit = NULL;
@@ -718,9 +796,9 @@ static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
 	void *end_page_addr;
 	size_t size, data_offsets_size;
 
-	if (proc->alloc.vma == NULL) {
+	if (alloc->vma == NULL) {
 		pr_err("%d: binder_alloc_buf, no vma\n",
-		       proc->pid);
+		       alloc->pid);
 		return NULL;
 	}
 
@@ -728,29 +806,30 @@ static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
 		ALIGN(offsets_size, sizeof(void *));
 
 	if (data_offsets_size < data_size || data_offsets_size < offsets_size) {
-		binder_user_error("%d: got transaction with invalid size %zd-%zd\n",
-				proc->pid, data_size, offsets_size);
+		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+				"%d: got transaction with invalid size %zd-%zd\n",
+				alloc->pid, data_size, offsets_size);
 		return NULL;
 	}
 	size = data_offsets_size + ALIGN(extra_buffers_size, sizeof(void *));
 	if (size < data_offsets_size || size < extra_buffers_size) {
-		binder_user_error("%d: got transaction with invalid extra_buffers_size %zd\n",
-				  proc->pid, extra_buffers_size);
+		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+				"%d: got transaction with invalid extra_buffers_size %zd\n",
+				alloc->pid, extra_buffers_size);
 		return NULL;
 	}
 	if (is_async &&
-	    proc->alloc.free_async_space <
-			size + sizeof(struct binder_buffer)) {
-		binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
+	    alloc->free_async_space < size + sizeof(struct binder_buffer)) {
+		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
 			     "%d: binder_alloc_buf size %zd failed, no async space left\n",
-			      proc->pid, size);
+			      alloc->pid, size);
 		return NULL;
 	}
 
 	while (n) {
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
 		BUG_ON(!buffer->free);
-		buffer_size = binder_buffer_size(proc, buffer);
+		buffer_size = binder_alloc_buffer_size(alloc, buffer);
 
 		if (size < buffer_size) {
 			best_fit = n;
@@ -764,17 +843,17 @@ static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
 	}
 	if (best_fit == NULL) {
 		pr_err("%d: binder_alloc_buf size %zd failed, no address space\n",
-			proc->pid, size);
+			alloc->pid, size);
 		return NULL;
 	}
 	if (n == NULL) {
 		buffer = rb_entry(best_fit, struct binder_buffer, rb_node);
-		buffer_size = binder_buffer_size(proc, buffer);
+		buffer_size = binder_alloc_buffer_size(alloc, buffer);
 	}
 
-	binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
-		     "%d: binder_alloc_buf size %zd got buffer %p size %zd\n",
-		      proc->pid, size, buffer, buffer_size);
+	binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+		     "%d: %s size %zd got buffer %pK size %zd\n",
+		      alloc->pid, __func__, size, buffer, buffer_size);
 
 	has_page_addr =
 		(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK);
@@ -788,38 +867,52 @@ static struct binder_buffer *binder_alloc_buf(struct binder_proc *proc,
 		(void *)PAGE_ALIGN((uintptr_t)buffer->data + buffer_size);
 	if (end_page_addr > has_page_addr)
 		end_page_addr = has_page_addr;
-	if (binder_update_page_range(proc, 1,
+	if (binder_update_page_range(alloc, 1,
 	    (void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr, NULL))
 		return NULL;
 
-	rb_erase(best_fit, &proc->alloc.free_buffers);
+	rb_erase(best_fit, &alloc->free_buffers);
 	buffer->free = 0;
-	binder_insert_allocated_buffer(proc, buffer);
+	binder_insert_allocated_buffer(alloc, buffer);
 	if (buffer_size != size) {
 		struct binder_buffer *new_buffer = (void *)buffer->data + size;
 
 		list_add(&new_buffer->entry, &buffer->entry);
 		new_buffer->free = 1;
-		binder_insert_free_buffer(proc, new_buffer);
+		binder_insert_free_buffer(alloc, new_buffer);
 	}
-	binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
-		     "%d: binder_alloc_buf size %zd got %p\n",
-		      proc->pid, size, buffer);
+	binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+		     "%d: %s size %zd got %pK\n",
+		      alloc->pid, __func__, size, buffer);
 	buffer->data_size = data_size;
 	buffer->offsets_size = offsets_size;
-	buffer->extra_buffers_size = extra_buffers_size;
 	buffer->async_transaction = is_async;
+	buffer->extra_buffers_size = extra_buffers_size;
 	if (is_async) {
-		proc->alloc.free_async_space -=
-			size + sizeof(struct binder_buffer);
-		binder_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
+		alloc->free_async_space -= size + sizeof(struct binder_buffer);
+		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC_ASYNC,
 			     "%d: binder_alloc_buf size %zd async free %zd\n",
-			      proc->pid, size, proc->alloc.free_async_space);
+			      alloc->pid, size, alloc->free_async_space);
 	}
 
 	return buffer;
 }
 
+struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
+					   size_t data_size,
+					   size_t offsets_size,
+					   size_t extra_buffers_size,
+					   int is_async)
+{
+	struct binder_buffer *buffer;
+
+	mutex_lock(&alloc->mutex);
+	buffer = binder_alloc_new_buf_locked(alloc, data_size, offsets_size,
+					     extra_buffers_size, is_async);
+	mutex_unlock(&alloc->mutex);
+	return buffer;
+}
+
 static void *buffer_start_page(struct binder_buffer *buffer)
 {
 	return (void *)((uintptr_t)buffer & PAGE_MASK);
@@ -830,26 +923,26 @@ static void *buffer_end_page(struct binder_buffer *buffer)
 	return (void *)(((uintptr_t)(buffer + 1) - 1) & PAGE_MASK);
 }
 
-static void binder_delete_free_buffer(struct binder_proc *proc,
+static void binder_delete_free_buffer(struct binder_alloc *alloc,
 				      struct binder_buffer *buffer)
 {
 	struct binder_buffer *prev, *next = NULL;
 	int free_page_end = 1;
 	int free_page_start = 1;
 
-	BUG_ON(proc->alloc.buffers.next == &buffer->entry);
+	BUG_ON(alloc->buffers.next == &buffer->entry);
 	prev = list_entry(buffer->entry.prev, struct binder_buffer, entry);
 	BUG_ON(!prev->free);
 	if (buffer_end_page(prev) == buffer_start_page(buffer)) {
 		free_page_start = 0;
 		if (buffer_end_page(prev) == buffer_end_page(buffer))
 			free_page_end = 0;
-		binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
-			     "%d: merge free, buffer %p share page with %p\n",
-			      proc->pid, buffer, prev);
+		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+			     "%d: merge free, buffer %pK share page with %pK\n",
+			      alloc->pid, buffer, prev);
 	}
 
-	if (!list_is_last(&buffer->entry, &proc->alloc.buffers)) {
+	if (!list_is_last(&buffer->entry, &alloc->buffers)) {
 		next = list_entry(buffer->entry.next,
 				  struct binder_buffer, entry);
 		if (buffer_start_page(next) == buffer_end_page(buffer)) {
@@ -857,80 +950,88 @@ static void binder_delete_free_buffer(struct binder_proc *proc,
 			if (buffer_start_page(next) ==
 			    buffer_start_page(buffer))
 				free_page_start = 0;
-			binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
-				     "%d: merge free, buffer %p share page with %p\n",
-				      proc->pid, buffer, prev);
+			binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+				     "%d: merge free, buffer %pK share page with %pK\n",
+				      alloc->pid, buffer, prev);
 		}
 	}
 	list_del(&buffer->entry);
 	if (free_page_start || free_page_end) {
-		binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
-			     "%d: merge free, buffer %p do not share page%s%s with %p or %p\n",
-			     proc->pid, buffer, free_page_start ? "" : " end",
+		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+			     "%d: merge free, buffer %pK do not share page%s%s with %pK or %pK\n",
+			     alloc->pid, buffer, free_page_start ? "" : " end",
 			     free_page_end ? "" : " start", prev, next);
-		binder_update_page_range(proc, 0, free_page_start ?
+		binder_update_page_range(alloc, 0, free_page_start ?
 			buffer_start_page(buffer) : buffer_end_page(buffer),
 			(free_page_end ? buffer_end_page(buffer) :
 			buffer_start_page(buffer)) + PAGE_SIZE, NULL);
 	}
 }
 
-static void binder_free_buf(struct binder_proc *proc,
-			    struct binder_buffer *buffer)
+static void binder_free_buf_locked(struct binder_alloc *alloc,
+				   struct binder_buffer *buffer)
 {
 	size_t size, buffer_size;
 
-	buffer_size = binder_buffer_size(proc, buffer);
+	buffer_size = binder_alloc_buffer_size(alloc, buffer);
 
 	size = ALIGN(buffer->data_size, sizeof(void *)) +
 		ALIGN(buffer->offsets_size, sizeof(void *)) +
 		ALIGN(buffer->extra_buffers_size, sizeof(void *));
 
-	binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
-		     "%d: binder_free_buf %p size %zd buffer_size %zd\n",
-		      proc->pid, buffer, size, buffer_size);
+	binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+		     "%d: binder_free_buf %pK size %zd buffer_size %zd\n",
+		      alloc->pid, buffer, size, buffer_size);
 
 	BUG_ON(buffer->free);
 	BUG_ON(size > buffer_size);
 	BUG_ON(buffer->transaction != NULL);
-	BUG_ON((void *)buffer < proc->alloc.buffer);
-	BUG_ON((void *)buffer > proc->alloc.buffer + proc->alloc.buffer_size);
+	BUG_ON((void *)buffer < alloc->buffer);
+	BUG_ON((void *)buffer > alloc->buffer + alloc->buffer_size);
 
 	if (buffer->async_transaction) {
-		proc->alloc.free_async_space +=
-			size + sizeof(struct binder_buffer);
+		alloc->free_async_space += size + sizeof(struct binder_buffer);
 
-		binder_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
+		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC_ASYNC,
 			     "%d: binder_free_buf size %zd async free %zd\n",
-			      proc->pid, size, proc->alloc.free_async_space);
+			      alloc->pid, size, alloc->free_async_space);
 	}
 
-	binder_update_page_range(proc, 0,
+	binder_update_page_range(alloc, 0,
 		(void *)PAGE_ALIGN((uintptr_t)buffer->data),
 		(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK),
 		NULL);
-	rb_erase(&buffer->rb_node, &proc->alloc.allocated_buffers);
+
+	rb_erase(&buffer->rb_node, &alloc->allocated_buffers);
 	buffer->free = 1;
-	if (!list_is_last(&buffer->entry, &proc->alloc.buffers)) {
+	if (!list_is_last(&buffer->entry, &alloc->buffers)) {
 		struct binder_buffer *next = list_entry(buffer->entry.next,
 						struct binder_buffer, entry);
 
 		if (next->free) {
-			rb_erase(&next->rb_node, &proc->alloc.free_buffers);
-			binder_delete_free_buffer(proc, next);
+			rb_erase(&next->rb_node, &alloc->free_buffers);
+			binder_delete_free_buffer(alloc, next);
 		}
 	}
-	if (proc->alloc.buffers.next != &buffer->entry) {
+	if (alloc->buffers.next != &buffer->entry) {
 		struct binder_buffer *prev = list_entry(buffer->entry.prev,
 						struct binder_buffer, entry);
 
 		if (prev->free) {
-			binder_delete_free_buffer(proc, buffer);
-			rb_erase(&prev->rb_node, &proc->alloc.free_buffers);
+			binder_delete_free_buffer(alloc, buffer);
+			rb_erase(&prev->rb_node, &alloc->free_buffers);
 			buffer = prev;
 		}
 	}
-	binder_insert_free_buffer(proc, buffer);
+	binder_insert_free_buffer(alloc, buffer);
+}
+
+void binder_alloc_free_buf(struct binder_alloc *alloc,
+			    struct binder_buffer *buffer)
+{
+	mutex_lock(&alloc->mutex);
+	binder_free_buf_locked(alloc, buffer);
+	mutex_unlock(&alloc->mutex);
 }
 
 static struct binder_node *binder_get_node(struct binder_proc *proc,
@@ -1564,7 +1665,8 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
 			 * back to kernel address space to access it
 			 */
 			parent_buffer = parent->buffer -
-				proc->alloc.user_buffer_offset;
+				binder_alloc_get_user_buffer_offset(
+						&proc->alloc);
 
 			fd_buf_size = sizeof(u32) * fda->num_fds;
 			if (fda->num_fds >= SIZE_MAX / sizeof(u32)) {
@@ -1782,7 +1884,8 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
 	 * Since the parent was already fixed up, convert it
 	 * back to the kernel address space to access it
 	 */
-	parent_buffer = parent->buffer - target_proc->alloc.user_buffer_offset;
+	parent_buffer = parent->buffer -
+		binder_alloc_get_user_buffer_offset(&target_proc->alloc);
 	fd_array = (u32 *)(parent_buffer + fda->parent_offset);
 	if (!IS_ALIGNED((unsigned long)fd_array, sizeof(u32))) {
 		binder_user_error("%d:%d parent offset not aligned correctly.\n",
@@ -1850,7 +1953,8 @@ static int binder_fixup_parent(struct binder_transaction *t,
 		return -EINVAL;
 	}
 	parent_buffer = (u8 *)(parent->buffer -
-			       target_proc->alloc.user_buffer_offset);
+			binder_alloc_get_user_buffer_offset(
+				&target_proc->alloc));
 	*(binder_uintptr_t *)(parent_buffer + bp->parent_offset) = bp->buffer;
 
 	return 0;
@@ -2036,7 +2140,7 @@ static void binder_transaction(struct binder_proc *proc,
 
 	trace_binder_transaction(reply, t, target_node);
 
-	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
+	t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
 		tr->offsets_size, extra_buffers_size,
 		!reply && (t->flags & TF_ONE_WAY));
 	if (t->buffer == NULL) {
@@ -2190,7 +2294,8 @@ static void binder_transaction(struct binder_proc *proc,
 			}
 			/* Fixup buffer pointer to target proc address space */
 			bp->buffer = (uintptr_t)sg_bufp +
-				target_proc->alloc.user_buffer_offset;
+				binder_alloc_get_user_buffer_offset(
+						&target_proc->alloc);
 			sg_bufp += ALIGN(bp->length, sizeof(u64));
 
 			ret = binder_fixup_parent(t, thread, bp, off_start,
@@ -2248,7 +2353,7 @@ static void binder_transaction(struct binder_proc *proc,
 	trace_binder_transaction_failed_buffer_release(t->buffer);
 	binder_transaction_buffer_release(target_proc, t->buffer, offp);
 	t->buffer->transaction = NULL;
-	binder_free_buf(target_proc, t->buffer);
+	binder_alloc_free_buf(&target_proc->alloc, t->buffer);
 err_binder_alloc_buf_failed:
 	kfree(tcomplete);
 	binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
@@ -2428,7 +2533,8 @@ static int binder_thread_write(struct binder_proc *proc,
 				return -EFAULT;
 			ptr += sizeof(binder_uintptr_t);
 
-			buffer = binder_buffer_lookup(proc, data_ptr);
+			buffer = binder_alloc_buffer_lookup(&proc->alloc,
+							    data_ptr);
 			if (buffer == NULL) {
 				binder_user_error("%d:%d BC_FREE_BUFFER u%016llx no match\n",
 					proc->pid, thread->pid, (u64)data_ptr);
@@ -2458,7 +2564,7 @@ static int binder_thread_write(struct binder_proc *proc,
 			}
 			trace_binder_transaction_buffer_release(buffer);
 			binder_transaction_buffer_release(proc, buffer, NULL);
-			binder_free_buf(proc, buffer);
+			binder_alloc_free_buf(&proc->alloc, buffer);
 			break;
 		}
 
@@ -2950,9 +3056,9 @@ static int binder_thread_read(struct binder_proc *proc,
 
 		tr.data_size = t->buffer->data_size;
 		tr.offsets_size = t->buffer->offsets_size;
-		tr.data.ptr.buffer = (binder_uintptr_t)(
-					(uintptr_t)t->buffer->data +
-					proc->alloc.user_buffer_offset);
+		tr.data.ptr.buffer = (binder_uintptr_t)
+			((uintptr_t)t->buffer->data +
+			binder_alloc_get_user_buffer_offset(&proc->alloc));
 		tr.data.ptr.offsets = tr.data.ptr.buffer +
 					ALIGN(t->buffer->data_size,
 					    sizeof(void *));
@@ -3361,6 +3467,12 @@ static void binder_vma_open(struct vm_area_struct *vma)
 		     (unsigned long)pgprot_val(vma->vm_page_prot));
 }
 
+void binder_alloc_vma_close(struct binder_alloc *alloc)
+{
+	WRITE_ONCE(alloc->vma, NULL);
+	WRITE_ONCE(alloc->vma_vm_mm, NULL);
+}
+
 static void binder_vma_close(struct vm_area_struct *vma)
 {
 	struct binder_proc *proc = vma->vm_private_data;
@@ -3370,8 +3482,7 @@ static void binder_vma_close(struct vm_area_struct *vma)
 		     proc->pid, vma->vm_start, vma->vm_end,
 		     (vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags,
 		     (unsigned long)pgprot_val(vma->vm_page_prot));
-	proc->alloc.vma = NULL;
-	proc->alloc.vma_vm_mm = NULL;
+	binder_alloc_vma_close(&proc->alloc);
 	binder_defer_work(proc, BINDER_DEFERRED_PUT_FILES);
 }
 
@@ -3386,35 +3497,16 @@ static const struct vm_operations_struct binder_vm_ops = {
 	.fault = binder_vm_fault,
 };
 
-static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
+int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+			      struct vm_area_struct *vma)
 {
 	int ret;
 	struct vm_struct *area;
-	struct binder_proc *proc = filp->private_data;
 	const char *failure_string;
 	struct binder_buffer *buffer;
 
-	if (proc->tsk != current)
-		return -EINVAL;
-
-	if ((vma->vm_end - vma->vm_start) > SZ_4M)
-		vma->vm_end = vma->vm_start + SZ_4M;
-
-	binder_debug(BINDER_DEBUG_OPEN_CLOSE,
-		     "binder_mmap: %d %lx-%lx (%ld K) vma %lx pagep %lx\n",
-		     proc->pid, vma->vm_start, vma->vm_end,
-		     (vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags,
-		     (unsigned long)pgprot_val(vma->vm_page_prot));
-
-	if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {
-		ret = -EPERM;
-		failure_string = "bad vm_flags";
-		goto err_bad_arg;
-	}
-	vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
-
-	mutex_lock(&binder_mmap_lock);
-	if (proc->alloc.buffer) {
+	mutex_lock(&binder_alloc_mmap_lock);
+	if (alloc->buffer) {
 		ret = -EBUSY;
 		failure_string = "already mapped";
 		goto err_already_mapped;
@@ -3426,75 +3518,111 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
 		failure_string = "get_vm_area";
 		goto err_get_vm_area_failed;
 	}
-	proc->alloc.buffer = area->addr;
-	proc->alloc.user_buffer_offset =
-		vma->vm_start - (uintptr_t)proc->alloc.buffer;
-	mutex_unlock(&binder_mmap_lock);
+	alloc->buffer = area->addr;
+	alloc->user_buffer_offset =
+			vma->vm_start - (uintptr_t)alloc->buffer;
+	mutex_unlock(&binder_alloc_mmap_lock);
 
 #ifdef CONFIG_CPU_CACHE_VIPT
 	if (cache_is_vipt_aliasing()) {
 		while (CACHE_COLOUR(
-			(vma->vm_start ^ (uint32_t)proc->alloc.buffer))) {
+			(vma->vm_start ^ (uint32_t)alloc->buffer))) {
 			pr_info("%s: %d %lx-%lx maps %pK bad alignment\n",
 				__func__,
-				proc->pid, vma->vm_start,
-				vma->vm_end, proc->alloc.buffer);
+				alloc->pid, vma->vm_start, vma->vm_end,
+				alloc->buffer);
 			vma->vm_start += PAGE_SIZE;
 		}
 	}
 #endif
-	proc->alloc.pages =
-		kzalloc(sizeof(proc->alloc.pages[0]) *
-				((vma->vm_end - vma->vm_start) / PAGE_SIZE),
-			GFP_KERNEL);
-	if (proc->alloc.pages == NULL) {
+	alloc->pages = kzalloc(sizeof(alloc->pages[0]) *
+				   ((vma->vm_end - vma->vm_start) / PAGE_SIZE),
+			       GFP_KERNEL);
+	if (alloc->pages == NULL) {
 		ret = -ENOMEM;
 		failure_string = "alloc page array";
 		goto err_alloc_pages_failed;
 	}
-	proc->alloc.buffer_size = vma->vm_end - vma->vm_start;
+	alloc->buffer_size = vma->vm_end - vma->vm_start;
 
-	vma->vm_ops = &binder_vm_ops;
-	vma->vm_private_data = proc;
-
-	if (binder_update_page_range(proc, 1, proc->alloc.buffer,
-				     proc->alloc.buffer + PAGE_SIZE, vma)) {
+	if (binder_update_page_range(alloc, 1, alloc->buffer,
+				     alloc->buffer + PAGE_SIZE, vma)) {
 		ret = -ENOMEM;
 		failure_string = "alloc small buf";
 		goto err_alloc_small_buf_failed;
 	}
-	buffer = proc->alloc.buffer;
-	INIT_LIST_HEAD(&proc->alloc.buffers);
-	list_add(&buffer->entry, &proc->alloc.buffers);
+	buffer = alloc->buffer;
+	INIT_LIST_HEAD(&alloc->buffers);
+	list_add(&buffer->entry, &alloc->buffers);
 	buffer->free = 1;
-	binder_insert_free_buffer(proc, buffer);
-	proc->alloc.free_async_space = proc->alloc.buffer_size / 2;
+	binder_insert_free_buffer(alloc, buffer);
+	alloc->free_async_space = alloc->buffer_size / 2;
 	barrier();
-	proc->files = get_files_struct(current);
-	proc->alloc.vma = vma;
-	proc->alloc.vma_vm_mm = vma->vm_mm;
+	alloc->vma = vma;
+	alloc->vma_vm_mm = vma->vm_mm;
 
-	/*pr_info("binder_mmap: %d %lx-%lx maps %pK\n",
-	 *	proc->pid, vma->vm_start, vma->vm_end, proc->alloc.buffer);
-	 */
 	return 0;
 
 err_alloc_small_buf_failed:
-	kfree(proc->alloc.pages);
-	proc->alloc.pages = NULL;
+	kfree(alloc->pages);
+	alloc->pages = NULL;
 err_alloc_pages_failed:
-	mutex_lock(&binder_mmap_lock);
-	vfree(proc->alloc.buffer);
-	proc->alloc.buffer = NULL;
+	mutex_lock(&binder_alloc_mmap_lock);
+	vfree(alloc->buffer);
+	alloc->buffer = NULL;
 err_get_vm_area_failed:
 err_already_mapped:
-	mutex_unlock(&binder_mmap_lock);
+	mutex_unlock(&binder_alloc_mmap_lock);
+	pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
+	       alloc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
+	return ret;
+}
+
+static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	int ret;
+	struct binder_proc *proc = filp->private_data;
+	const char *failure_string;
+
+	if (proc->tsk != current->group_leader)
+		return -EINVAL;
+
+	if ((vma->vm_end - vma->vm_start) > SZ_4M)
+		vma->vm_end = vma->vm_start + SZ_4M;
+
+	binder_debug(BINDER_DEBUG_OPEN_CLOSE,
+		     "%s: %d %lx-%lx (%ld K) vma %lx pagep %lx\n",
+		     __func__, proc->pid, vma->vm_start, vma->vm_end,
+		     (vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags,
+		     (unsigned long)pgprot_val(vma->vm_page_prot));
+
+	if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {
+		ret = -EPERM;
+		failure_string = "bad vm_flags";
+		goto err_bad_arg;
+	}
+	vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
+	vma->vm_ops = &binder_vm_ops;
+	vma->vm_private_data = proc;
+
+	ret = binder_alloc_mmap_handler(&proc->alloc, vma);
+	if (ret)
+		return ret;
+	proc->files = get_files_struct(current);
+	return 0;
+
 err_bad_arg:
 	pr_err("binder_mmap: %d %lx-%lx %s failed %d\n",
 	       proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
 	return ret;
 }
 
+void binder_alloc_init(struct binder_alloc *alloc)
+{
+	alloc->tsk = current->group_leader;
+	alloc->pid = current->group_leader->pid;
+	mutex_init(&alloc->mutex);
+}
+
 static int binder_open(struct inode *nodp, struct file *filp)
 {
 	struct binder_proc *proc;
@@ -3514,6 +3642,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
 	binder_dev = container_of(filp->private_data, struct binder_device,
 				  miscdev);
 	proc->context = &binder_dev->context;
+	binder_alloc_init(&proc->alloc);
 
 	binder_lock(__func__);
 
@@ -3629,14 +3758,61 @@ static int binder_node_release(struct binder_node *node, int refs)
 	return refs;
 }
 
+void binder_alloc_deferred_release(struct binder_alloc *alloc)
+{
+	struct rb_node *n;
+	int buffers, page_count;
+
+	BUG_ON(alloc->vma);
+
+	buffers = 0;
+	mutex_lock(&alloc->mutex);
+	while ((n = rb_first(&alloc->allocated_buffers))) {
+		struct binder_buffer *buffer;
+
+		buffer = rb_entry(n, struct binder_buffer, rb_node);
+
+		/* Transaction should already have been freed */
+		BUG_ON(buffer->transaction);
+
+		binder_free_buf_locked(alloc, buffer);
+		buffers++;
+	}
+
+	page_count = 0;
+	if (alloc->pages) {
+		int i;
+
+		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+			void *page_addr;
+
+			if (!alloc->pages[i])
+				continue;
+
+			page_addr = alloc->buffer + i * PAGE_SIZE;
+			binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
+				     "%s: %d: page %d at %pK not freed\n",
+				     __func__, alloc->pid, i, page_addr);
+			unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
+			__free_page(alloc->pages[i]);
+			page_count++;
+		}
+		kfree(alloc->pages);
+		vfree(alloc->buffer);
+	}
+	mutex_unlock(&alloc->mutex);
+
+	binder_alloc_debug(BINDER_ALLOC_DEBUG_OPEN_CLOSE,
+		     "%s: %d buffers %d, pages %d\n",
+		     __func__, alloc->pid, buffers, page_count);
+}
+
 static void binder_deferred_release(struct binder_proc *proc)
 {
 	struct binder_context *context = proc->context;
 	struct rb_node *n;
-	int threads, nodes, incoming_refs, outgoing_refs, buffers,
-		active_transactions, page_count;
+	int threads, nodes, incoming_refs, outgoing_refs, active_transactions;
 
-	BUG_ON(proc->alloc.vma);
 	BUG_ON(proc->files);
 
 	hlist_del(&proc->proc_node);
@@ -3682,49 +3858,15 @@ static void binder_deferred_release(struct binder_proc *proc)
 	binder_release_work(&proc->todo);
 	binder_release_work(&proc->delivered_death);
 
-	buffers = 0;
-	while ((n = rb_first(&proc->alloc.allocated_buffers))) {
-		struct binder_buffer *buffer;
-
-		buffer = rb_entry(n, struct binder_buffer, rb_node);
-
-		/* Transaction should already have been freed */
-		BUG_ON(buffer->transaction);
-
-		binder_free_buf(proc, buffer);
-		buffers++;
-	}
-
+	binder_alloc_deferred_release(&proc->alloc);
 	binder_stats_deleted(BINDER_STAT_PROC);
 
-	page_count = 0;
-	if (proc->alloc.pages) {
-		int i;
-
-		for (i = 0; i < proc->alloc.buffer_size / PAGE_SIZE; i++) {
-			void *page_addr;
-
-			if (!proc->alloc.pages[i])
-				continue;
-
-			page_addr = proc->alloc.buffer + i * PAGE_SIZE;
-			binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
-				     "%s: %d: page %d at %p not freed\n",
-				     __func__, proc->pid, i, page_addr);
-			unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
-			__free_page(proc->alloc.pages[i]);
-			page_count++;
-		}
-		kfree(proc->alloc.pages);
-		vfree(proc->alloc.buffer);
-	}
-
 	put_task_struct(proc->tsk);
 
 	binder_debug(BINDER_DEBUG_OPEN_CLOSE,
-		     "%s: %d threads %d, nodes %d (ref %d), refs %d, active transactions %d, buffers %d, pages %d\n",
+		     "%s: %d threads %d, nodes %d (ref %d), refs %d, active transactions %d\n",
 		     __func__, proc->pid, threads, nodes, incoming_refs,
-		     outgoing_refs, active_transactions, buffers, page_count);
+		     outgoing_refs, active_transactions);
 
 	kfree(proc);
 }
@@ -3807,15 +3949,6 @@ static void print_binder_transaction(struct seq_file *m, const char *prefix,
 		   t->buffer->data);
 }
 
-static void print_binder_buffer(struct seq_file *m, const char *prefix,
-				struct binder_buffer *buffer)
-{
-	seq_printf(m, "%s %d: %p size %zd:%zd %s\n",
-		   prefix, buffer->debug_id, buffer->data,
-		   buffer->data_size, buffer->offsets_size,
-		   buffer->transaction ? "active" : "delivered");
-}
-
 static void print_binder_work(struct seq_file *m, const char *prefix,
 			      const char *transaction_prefix,
 			      struct binder_work *w)
@@ -3918,6 +4051,27 @@ static void print_binder_ref(struct seq_file *m, struct binder_ref *ref)
 		   ref->node->debug_id, ref->strong, ref->weak, ref->death);
 }
 
+static void print_binder_buffer(struct seq_file *m, const char *prefix,
+				struct binder_buffer *buffer)
+{
+	seq_printf(m, "%s %d: %pK size %zd:%zd %s\n",
+		   prefix, buffer->debug_id, buffer->data,
+		   buffer->data_size, buffer->offsets_size,
+		   buffer->transaction ? "active" : "delivered");
+}
+
+void binder_alloc_print_allocated(struct seq_file *m,
+				  struct binder_alloc *alloc)
+{
+	struct rb_node *n;
+
+	mutex_lock(&alloc->mutex);
+	for (n = rb_first(&alloc->allocated_buffers); n != NULL; n = rb_next(n))
+		print_binder_buffer(m, "  buffer",
+				    rb_entry(n, struct binder_buffer, rb_node));
+	mutex_unlock(&alloc->mutex);
+}
+
 static void print_binder_proc(struct seq_file *m,
 			      struct binder_proc *proc, int print_all)
 {
@@ -3946,10 +4100,7 @@ static void print_binder_proc(struct seq_file *m,
 			print_binder_ref(m, rb_entry(n, struct binder_ref,
 						     rb_node_desc));
 	}
-	for (n = rb_first(&proc->alloc.allocated_buffers);
-			n != NULL; n = rb_next(n))
-		print_binder_buffer(m, "  buffer",
-				    rb_entry(n, struct binder_buffer, rb_node));
+	binder_alloc_print_allocated(m, &proc->alloc);
 	list_for_each_entry(w, &proc->todo, entry)
 		print_binder_work(m, "  ", "  pending transaction", w);
 	list_for_each_entry(w, &proc->delivered_death, entry) {
@@ -4047,6 +4198,18 @@ static void print_binder_stats(struct seq_file *m, const char *prefix,
 	}
 }
 
+int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
+{
+	struct rb_node *n;
+	int count = 0;
+
+	mutex_lock(&alloc->mutex);
+	for (n = rb_first(&alloc->allocated_buffers); n != NULL; n = rb_next(n))
+		count++;
+	mutex_unlock(&alloc->mutex);
+	return count;
+}
+
 static void print_binder_proc_stats(struct seq_file *m,
 				    struct binder_proc *proc)
 {
@@ -4064,7 +4227,8 @@ static void print_binder_proc_stats(struct seq_file *m,
 			"  ready threads %d\n"
 			"  free async space %zd\n", proc->requested_threads,
 			proc->requested_threads_started, proc->max_threads,
-			proc->ready_threads, proc->alloc.free_async_space);
+			proc->ready_threads,
+			binder_alloc_get_free_async_space(&proc->alloc));
 	count = 0;
 	for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n))
 		count++;
@@ -4081,10 +4245,7 @@ static void print_binder_proc_stats(struct seq_file *m,
 	}
 	seq_printf(m, "  refs: %d s %d w %d\n", count, strong, weak);
 
-	count = 0;
-	for (n = rb_first(&proc->alloc.allocated_buffers);
-			n != NULL; n = rb_next(n))
-		count++;
+	count = binder_alloc_get_allocated_count(&proc->alloc);
 	seq_printf(m, "  buffers: %d\n", count);
 
 	count = 0;
diff --git a/drivers/android/binder_trace.h b/drivers/android/binder_trace.h
index c835f09656c1..50b0d21f42cf 100644
--- a/drivers/android/binder_trace.h
+++ b/drivers/android/binder_trace.h
@@ -23,6 +23,7 @@
 struct binder_buffer;
 struct binder_node;
 struct binder_proc;
+struct binder_alloc;
 struct binder_ref;
 struct binder_thread;
 struct binder_transaction;
@@ -268,9 +269,9 @@ DEFINE_EVENT(binder_buffer_class, binder_transaction_failed_buffer_release,
 	TP_ARGS(buffer));
 
 TRACE_EVENT(binder_update_page_range,
-	TP_PROTO(struct binder_proc *proc, bool allocate,
+	TP_PROTO(struct binder_alloc *alloc, bool allocate,
 		 void *start, void *end),
-	TP_ARGS(proc, allocate, start, end),
+	TP_ARGS(alloc, allocate, start, end),
 	TP_STRUCT__entry(
 		__field(int, proc)
 		__field(bool, allocate)
@@ -278,9 +279,9 @@ TRACE_EVENT(binder_update_page_range,
 		__field(size_t, size)
 	),
 	TP_fast_assign(
-		__entry->proc = proc->pid;
+		__entry->proc = alloc->pid;
 		__entry->allocate = allocate;
-		__entry->offset = start - proc->alloc.buffer;
+		__entry->offset = start - alloc->buffer;
 		__entry->size = end - start;
 	),
 	TP_printk("proc=%d allocate=%d offset=%zu size=%zu",
-- 
2.13.2.725.g09c95d1e9-goog


* [PATCH 07/37] binder: move binder_alloc to separate file
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (5 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 06/37] binder: separate out binder_alloc functions Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 08/37] binder: remove binder_debug_no_lock mechanism Todd Kjos
                   ` (30 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Move the binder allocator functionality to its own file.

Continuation of splitting the binder allocator from the binder
driver. Split binder_alloc functions from normal binder functions.
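
The split follows a simple locking convention: internal helpers take a
_locked suffix and expect alloc->mutex to be held, while the entry
points called from binder.c are thin wrappers that acquire the mutex.
A minimal sketch of the pattern, mirroring binder_alloc_free_buf() as
it appears in the diff below:

    void binder_alloc_free_buf(struct binder_alloc *alloc,
                               struct binder_buffer *buffer)
    {
        /* all allocator state is serialized on the per-proc mutex */
        mutex_lock(&alloc->mutex);
        binder_free_buf_locked(alloc, buffer);
        mutex_unlock(&alloc->mutex);
    }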

Add kernel doc comments to functions declared extern in
binder_alloc.h
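
These follow the standard kernel-doc format; the comment added for
binder_alloc_buffer_lookup() in binder_alloc.c below is representative:

    /**
     * binder_alloc_buffer_lookup() - get buffer given user ptr
     * @alloc:	binder_alloc for this proc
     * @user_ptr:	User pointer to buffer data
     *
     * Validate userspace pointer to buffer data and return buffer
     * corresponding to that user pointer. Search the rb tree for a
     * buffer that matches the user data pointer.
     *
     * Return:	Pointer to buffer or NULL
     */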

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/Makefile       |   2 +-
 drivers/android/binder.c       | 763 +----------------------------------------
 drivers/android/binder_alloc.c | 759 ++++++++++++++++++++++++++++++++++++++++
 drivers/android/binder_alloc.h | 162 +++++++++
 4 files changed, 923 insertions(+), 763 deletions(-)
 create mode 100644 drivers/android/binder_alloc.c
 create mode 100644 drivers/android/binder_alloc.h
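
For reference, the interface binder.c now consumes through
binder_alloc.h (matching the forward declarations removed from
binder.c below) centers on these entry points:

    struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
                                               size_t data_size,
                                               size_t offsets_size,
                                               size_t extra_buffers_size,
                                               int is_async);
    void binder_alloc_free_buf(struct binder_alloc *alloc,
                               struct binder_buffer *buffer);
    struct binder_buffer *binder_alloc_buffer_lookup(struct binder_alloc *alloc,
                                                     uintptr_t user_ptr);
    int binder_alloc_mmap_handler(struct binder_alloc *alloc,
                                  struct vm_area_struct *vma);
    void binder_alloc_vma_close(struct binder_alloc *alloc);
    void binder_alloc_deferred_release(struct binder_alloc *alloc);
    void binder_alloc_init(struct binder_alloc *alloc);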

diff --git a/drivers/android/Makefile b/drivers/android/Makefile
index 3b7e4b072c58..4b7c726bb560 100644
--- a/drivers/android/Makefile
+++ b/drivers/android/Makefile
@@ -1,3 +1,3 @@
 ccflags-y += -I$(src)			# needed for trace events
 
-obj-$(CONFIG_ANDROID_BINDER_IPC)	+= binder.o
+obj-$(CONFIG_ANDROID_BINDER_IPC)	+= binder.o binder_alloc.o
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 6c1d0b5b352b..0512971cfc53 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -24,7 +24,6 @@
 #include <linux/fs.h>
 #include <linux/list.h>
 #include <linux/miscdevice.h>
-#include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/nsproxy.h>
@@ -35,8 +34,6 @@
 #include <linux/sched/mm.h>
 #include <linux/seq_file.h>
 #include <linux/uaccess.h>
-#include <linux/vmalloc.h>
-#include <linux/slab.h>
 #include <linux/pid_namespace.h>
 #include <linux/security.h>
 
@@ -45,11 +42,11 @@
 #endif
 
 #include <uapi/linux/android/binder.h>
+#include "binder_alloc.h"
 #include "binder_trace.h"
 
 static DEFINE_MUTEX(binder_main_lock);
 static DEFINE_MUTEX(binder_deferred_lock);
-static DEFINE_MUTEX(binder_alloc_mmap_lock);
 
 static HLIST_HEAD(binder_devices);
 static HLIST_HEAD(binder_procs);
@@ -157,27 +154,6 @@ module_param_call(stop_on_user_error, binder_set_stop_on_user_error,
 #define to_binder_fd_array_object(hdr) \
 	container_of(hdr, struct binder_fd_array_object, hdr)
 
-/*
- * debug declarations for binder_alloc. To be
- * moved to binder_alloc.c
- */
-enum {
-	BINDER_ALLOC_DEBUG_OPEN_CLOSE             = 1U << 1,
-	BINDER_ALLOC_DEBUG_BUFFER_ALLOC           = 1U << 2,
-	BINDER_ALLOC_DEBUG_BUFFER_ALLOC_ASYNC     = 1U << 3,
-};
-static uint32_t binder_alloc_debug_mask;
-
-module_param_named(alloc_debug_mask, binder_alloc_debug_mask,
-		   uint, 0644);
-
-#define binder_alloc_debug(mask, x...) \
-	do { \
-		if (binder_alloc_debug_mask & mask) \
-			pr_info(x); \
-	} while (0)
-/* end of binder_alloc debug declarations */
-
 enum binder_stat_types {
 	BINDER_STAT_PROC,
 	BINDER_STAT_THREAD,
@@ -314,68 +290,12 @@ struct binder_ref {
 	struct binder_ref_death *death;
 };
 
-struct binder_buffer {
-	struct list_head entry; /* free and allocated entries by address */
-	struct rb_node rb_node; /* free entry by size or allocated entry */
-				/* by address */
-	unsigned free:1;
-	unsigned allow_user_free:1;
-	unsigned async_transaction:1;
-	unsigned debug_id:29;
-
-	struct binder_transaction *transaction;
-
-	struct binder_node *target_node;
-	size_t data_size;
-	size_t offsets_size;
-	size_t extra_buffers_size;
-	uint8_t data[0];
-};
-
 enum binder_deferred_state {
 	BINDER_DEFERRED_PUT_FILES    = 0x01,
 	BINDER_DEFERRED_FLUSH        = 0x02,
 	BINDER_DEFERRED_RELEASE      = 0x04,
 };
 
-/**
- * struct binder_alloc - per-binder proc state for binder allocator
- * @vma:               vm_area_struct passed to mmap_handler
- *                     (invarient after mmap)
- * @vma_vm_mm:         copy of vma->vm_mm (invarient after mmap)
- * @buffer:            base of per-proc address space mapped via mmap
- * @user_buffer_offset: offset between user and kernel VAs for buffer
- * @buffers:           list of all buffers for this proc
- * @free_buffers:      rb tree of buffers available for allocation
- *                     sorted by size
- * @allocated_buffers: rb tree of allocated buffers sorted by address
- * @free_async_space:  VA space available for async buffers. This is
- *                     initialized at mmap time to 1/2 the full VA space
- * @pages:             array of physical page addresses for each page of
- *                     mmap'd space
- * @buffer_size:       size of address space (could be less than requested)
- *
- * Bookkeeping structure for per-proc address space management for binder
- * buffers. It is normally initialized during binder_init() and binder_mmap()
- * calls. The address space is used for both user-visible buffers and for
- * struct binder_buffer objects used to track the user buffers
- */
-struct binder_alloc {
-	struct mutex mutex;
-	struct task_struct *tsk;
-	struct vm_area_struct *vma;
-	struct mm_struct *vma_vm_mm;
-	void *buffer;
-	ptrdiff_t user_buffer_offset;
-	struct list_head buffers;
-	struct rb_root free_buffers;
-	struct rb_root allocated_buffers;
-	size_t free_async_space;
-	struct page **pages;
-	size_t buffer_size;
-	int pid;
-};
-
 struct binder_proc {
 	struct hlist_node proc_node;
 	struct rb_root threads;
@@ -445,56 +365,6 @@ struct binder_transaction {
 	kuid_t	sender_euid;
 };
 
-/*
- * Forward declarations of binder_alloc functions.
- * These will be moved to binder_alloc.h when
- * binder_alloc is moved to its own files.
- */
-extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
-						  size_t data_size,
-						  size_t offsets_size,
-						  size_t extra_buffers_size,
-						  int is_async);
-extern void binder_alloc_init(struct binder_alloc *alloc);
-extern void binder_alloc_vma_close(struct binder_alloc *alloc);
-extern struct binder_buffer *
-binder_alloc_buffer_lookup(struct binder_alloc *alloc,
-			   uintptr_t user_ptr);
-extern void binder_alloc_free_buf(struct binder_alloc *alloc,
-				  struct binder_buffer *buffer);
-extern int binder_alloc_mmap_handler(struct binder_alloc *alloc,
-				     struct vm_area_struct *vma);
-extern void binder_alloc_deferred_release(struct binder_alloc *alloc);
-extern int binder_alloc_get_allocated_count(struct binder_alloc *alloc);
-extern void binder_alloc_print_allocated(struct seq_file *m,
-					 struct binder_alloc *alloc);
-
-static inline size_t
-binder_alloc_get_free_async_space(struct binder_alloc *alloc)
-{
-	size_t free_async_space;
-
-	mutex_lock(&alloc->mutex);
-	free_async_space = alloc->free_async_space;
-	mutex_unlock(&alloc->mutex);
-	return free_async_space;
-}
-
-static inline ptrdiff_t
-binder_alloc_get_user_buffer_offset(struct binder_alloc *alloc)
-{
-	/*
-	 * user_buffer_offset is constant if vma is set and
-	 * undefined if vma is not set. It is possible to
-	 * get here with !alloc->vma if the target process
-	 * is dying while a transaction is being initiated.
-	 * Returning the old value is ok in this case and
-	 * the transaction will fail.
-	 */
-	return alloc->user_buffer_offset;
-}
-/* end of binder_alloc declarations */
-
 static void
 binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
 
@@ -578,462 +448,6 @@ static void binder_set_nice(long nice)
 	binder_user_error("%d RLIMIT_NICE not set\n", current->pid);
 }
 
-static size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
-				       struct binder_buffer *buffer)
-{
-	if (list_is_last(&buffer->entry, &alloc->buffers))
-		return alloc->buffer +
-		       alloc->buffer_size - (void *)buffer->data;
-	return (size_t)list_entry(buffer->entry.next,
-			  struct binder_buffer, entry) - (size_t)buffer->data;
-}
-
-static void binder_insert_free_buffer(struct binder_alloc *alloc,
-				      struct binder_buffer *new_buffer)
-{
-	struct rb_node **p = &alloc->free_buffers.rb_node;
-	struct rb_node *parent = NULL;
-	struct binder_buffer *buffer;
-	size_t buffer_size;
-	size_t new_buffer_size;
-
-	BUG_ON(!new_buffer->free);
-
-	new_buffer_size = binder_alloc_buffer_size(alloc, new_buffer);
-
-	binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-		     "%d: add free buffer, size %zd, at %pK\n",
-		      alloc->pid, new_buffer_size, new_buffer);
-
-	while (*p) {
-		parent = *p;
-		buffer = rb_entry(parent, struct binder_buffer, rb_node);
-		BUG_ON(!buffer->free);
-
-		buffer_size = binder_alloc_buffer_size(alloc, buffer);
-
-		if (new_buffer_size < buffer_size)
-			p = &parent->rb_left;
-		else
-			p = &parent->rb_right;
-	}
-	rb_link_node(&new_buffer->rb_node, parent, p);
-	rb_insert_color(&new_buffer->rb_node, &alloc->free_buffers);
-}
-
-static void binder_insert_allocated_buffer(struct binder_alloc *alloc,
-					   struct binder_buffer *new_buffer)
-{
-	struct rb_node **p = &alloc->allocated_buffers.rb_node;
-	struct rb_node *parent = NULL;
-	struct binder_buffer *buffer;
-
-	BUG_ON(new_buffer->free);
-
-	while (*p) {
-		parent = *p;
-		buffer = rb_entry(parent, struct binder_buffer, rb_node);
-		BUG_ON(buffer->free);
-
-		if (new_buffer < buffer)
-			p = &parent->rb_left;
-		else if (new_buffer > buffer)
-			p = &parent->rb_right;
-		else
-			BUG();
-	}
-	rb_link_node(&new_buffer->rb_node, parent, p);
-	rb_insert_color(&new_buffer->rb_node, &alloc->allocated_buffers);
-}
-
-static struct binder_buffer *binder_alloc_buffer_lookup_locked(
-		struct binder_alloc *alloc,
-		uintptr_t user_ptr)
-{
-	struct rb_node *n = alloc->allocated_buffers.rb_node;
-	struct binder_buffer *buffer;
-	struct binder_buffer *kern_ptr;
-
-	kern_ptr = (struct binder_buffer *)(user_ptr - alloc->user_buffer_offset
-		- offsetof(struct binder_buffer, data));
-
-	while (n) {
-		buffer = rb_entry(n, struct binder_buffer, rb_node);
-		BUG_ON(buffer->free);
-
-		if (kern_ptr < buffer)
-			n = n->rb_left;
-		else if (kern_ptr > buffer)
-			n = n->rb_right;
-		else
-			return buffer;
-	}
-	return NULL;
-}
-
-struct binder_buffer *binder_alloc_buffer_lookup(struct binder_alloc *alloc,
-						 uintptr_t user_ptr)
-{
-	struct binder_buffer *buffer;
-
-	mutex_lock(&alloc->mutex);
-	buffer = binder_alloc_buffer_lookup_locked(alloc, user_ptr);
-	mutex_unlock(&alloc->mutex);
-	return buffer;
-}
-
-static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
-				    void *start, void *end,
-				    struct vm_area_struct *vma)
-{
-	void *page_addr;
-	unsigned long user_page_addr;
-	struct page **page;
-	struct mm_struct *mm;
-
-	binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-		     "%d: %s pages %pK-%pK\n", alloc->pid,
-		     allocate ? "allocate" : "free", start, end);
-
-	if (end <= start)
-		return 0;
-
-	trace_binder_update_page_range(alloc, allocate, start, end);
-
-	if (vma)
-		mm = NULL;
-	else
-		mm = get_task_mm(alloc->tsk);
-
-	if (mm) {
-		down_write(&mm->mmap_sem);
-		vma = alloc->vma;
-		if (vma && mm != alloc->vma_vm_mm) {
-			pr_err("%d: vma mm and task mm mismatch\n",
-				alloc->pid);
-			vma = NULL;
-		}
-	}
-
-	if (allocate == 0)
-		goto free_range;
-
-	if (vma == NULL) {
-		pr_err("%d: binder_alloc_buf failed to map pages in userspace, no vma\n",
-			alloc->pid);
-		goto err_no_vma;
-	}
-
-	for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
-		int ret;
-
-		page = &alloc->pages[(page_addr - alloc->buffer) / PAGE_SIZE];
-
-		BUG_ON(*page);
-		*page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
-		if (*page == NULL) {
-			pr_err("%d: binder_alloc_buf failed for page at %pK\n",
-				alloc->pid, page_addr);
-			goto err_alloc_page_failed;
-		}
-		ret = map_kernel_range_noflush((unsigned long)page_addr,
-					PAGE_SIZE, PAGE_KERNEL, page);
-		flush_cache_vmap((unsigned long)page_addr,
-				(unsigned long)page_addr + PAGE_SIZE);
-		if (ret != 1) {
-			pr_err("%d: binder_alloc_buf failed to map page at %pK in kernel\n",
-			       alloc->pid, page_addr);
-			goto err_map_kernel_failed;
-		}
-		user_page_addr =
-			(uintptr_t)page_addr + alloc->user_buffer_offset;
-		ret = vm_insert_page(vma, user_page_addr, page[0]);
-		if (ret) {
-			pr_err("%d: binder_alloc_buf failed to map page at %lx in userspace\n",
-			       alloc->pid, user_page_addr);
-			goto err_vm_insert_page_failed;
-		}
-		/* vm_insert_page does not seem to increment the refcount */
-	}
-	if (mm) {
-		up_write(&mm->mmap_sem);
-		mmput(mm);
-	}
-	return 0;
-
-free_range:
-	for (page_addr = end - PAGE_SIZE; page_addr >= start;
-	     page_addr -= PAGE_SIZE) {
-		page = &alloc->pages[(page_addr - alloc->buffer) / PAGE_SIZE];
-		if (vma)
-			zap_page_range(vma, (uintptr_t)page_addr +
-				alloc->user_buffer_offset, PAGE_SIZE);
-err_vm_insert_page_failed:
-		unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
-err_map_kernel_failed:
-		__free_page(*page);
-		*page = NULL;
-err_alloc_page_failed:
-		;
-	}
-err_no_vma:
-	if (mm) {
-		up_write(&mm->mmap_sem);
-		mmput(mm);
-	}
-	return -ENOMEM;
-}
-
-static struct binder_buffer *binder_alloc_new_buf_locked(
-		struct binder_alloc *alloc, size_t data_size,
-		size_t offsets_size, size_t extra_buffers_size, int is_async)
-{
-	struct rb_node *n = alloc->free_buffers.rb_node;
-	struct binder_buffer *buffer;
-	size_t buffer_size;
-	struct rb_node *best_fit = NULL;
-	void *has_page_addr;
-	void *end_page_addr;
-	size_t size, data_offsets_size;
-
-	if (alloc->vma == NULL) {
-		pr_err("%d: binder_alloc_buf, no vma\n",
-		       alloc->pid);
-		return NULL;
-	}
-
-	data_offsets_size = ALIGN(data_size, sizeof(void *)) +
-		ALIGN(offsets_size, sizeof(void *));
-
-	if (data_offsets_size < data_size || data_offsets_size < offsets_size) {
-		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-				"%d: got transaction with invalid size %zd-%zd\n",
-				alloc->pid, data_size, offsets_size);
-		return NULL;
-	}
-	size = data_offsets_size + ALIGN(extra_buffers_size, sizeof(void *));
-	if (size < data_offsets_size || size < extra_buffers_size) {
-		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-				"%d: got transaction with invalid extra_buffers_size %zd\n",
-				alloc->pid, extra_buffers_size);
-		return NULL;
-	}
-	if (is_async &&
-	    alloc->free_async_space < size + sizeof(struct binder_buffer)) {
-		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-			     "%d: binder_alloc_buf size %zd failed, no async space left\n",
-			      alloc->pid, size);
-		return NULL;
-	}
-
-	while (n) {
-		buffer = rb_entry(n, struct binder_buffer, rb_node);
-		BUG_ON(!buffer->free);
-		buffer_size = binder_alloc_buffer_size(alloc, buffer);
-
-		if (size < buffer_size) {
-			best_fit = n;
-			n = n->rb_left;
-		} else if (size > buffer_size)
-			n = n->rb_right;
-		else {
-			best_fit = n;
-			break;
-		}
-	}
-	if (best_fit == NULL) {
-		pr_err("%d: binder_alloc_buf size %zd failed, no address space\n",
-			alloc->pid, size);
-		return NULL;
-	}
-	if (n == NULL) {
-		buffer = rb_entry(best_fit, struct binder_buffer, rb_node);
-		buffer_size = binder_alloc_buffer_size(alloc, buffer);
-	}
-
-	binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-		     "%d: %s size %zd got buffer %pK size %zd\n",
-		      alloc->pid, __func__, size, buffer, buffer_size);
-
-	has_page_addr =
-		(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK);
-	if (n == NULL) {
-		if (size + sizeof(struct binder_buffer) + 4 >= buffer_size)
-			buffer_size = size; /* no room for other buffers */
-		else
-			buffer_size = size + sizeof(struct binder_buffer);
-	}
-	end_page_addr =
-		(void *)PAGE_ALIGN((uintptr_t)buffer->data + buffer_size);
-	if (end_page_addr > has_page_addr)
-		end_page_addr = has_page_addr;
-	if (binder_update_page_range(alloc, 1,
-	    (void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr, NULL))
-		return NULL;
-
-	rb_erase(best_fit, &alloc->free_buffers);
-	buffer->free = 0;
-	binder_insert_allocated_buffer(alloc, buffer);
-	if (buffer_size != size) {
-		struct binder_buffer *new_buffer = (void *)buffer->data + size;
-
-		list_add(&new_buffer->entry, &buffer->entry);
-		new_buffer->free = 1;
-		binder_insert_free_buffer(alloc, new_buffer);
-	}
-	binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-		     "%d: %s size %zd got %pK\n",
-		      alloc->pid, __func__, size, buffer);
-	buffer->data_size = data_size;
-	buffer->offsets_size = offsets_size;
-	buffer->async_transaction = is_async;
-	buffer->extra_buffers_size = extra_buffers_size;
-	if (is_async) {
-		alloc->free_async_space -= size + sizeof(struct binder_buffer);
-		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC_ASYNC,
-			     "%d: binder_alloc_buf size %zd async free %zd\n",
-			      alloc->pid, size, alloc->free_async_space);
-	}
-
-	return buffer;
-}
-
-struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
-					   size_t data_size,
-					   size_t offsets_size,
-					   size_t extra_buffers_size,
-					   int is_async)
-{
-	struct binder_buffer *buffer;
-
-	mutex_lock(&alloc->mutex);
-	buffer = binder_alloc_new_buf_locked(alloc, data_size, offsets_size,
-					     extra_buffers_size, is_async);
-	mutex_unlock(&alloc->mutex);
-	return buffer;
-}
-
-static void *buffer_start_page(struct binder_buffer *buffer)
-{
-	return (void *)((uintptr_t)buffer & PAGE_MASK);
-}
-
-static void *buffer_end_page(struct binder_buffer *buffer)
-{
-	return (void *)(((uintptr_t)(buffer + 1) - 1) & PAGE_MASK);
-}
-
-static void binder_delete_free_buffer(struct binder_alloc *alloc,
-				      struct binder_buffer *buffer)
-{
-	struct binder_buffer *prev, *next = NULL;
-	int free_page_end = 1;
-	int free_page_start = 1;
-
-	BUG_ON(alloc->buffers.next == &buffer->entry);
-	prev = list_entry(buffer->entry.prev, struct binder_buffer, entry);
-	BUG_ON(!prev->free);
-	if (buffer_end_page(prev) == buffer_start_page(buffer)) {
-		free_page_start = 0;
-		if (buffer_end_page(prev) == buffer_end_page(buffer))
-			free_page_end = 0;
-		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-			     "%d: merge free, buffer %pK share page with %pK\n",
-			      alloc->pid, buffer, prev);
-	}
-
-	if (!list_is_last(&buffer->entry, &alloc->buffers)) {
-		next = list_entry(buffer->entry.next,
-				  struct binder_buffer, entry);
-		if (buffer_start_page(next) == buffer_end_page(buffer)) {
-			free_page_end = 0;
-			if (buffer_start_page(next) ==
-			    buffer_start_page(buffer))
-				free_page_start = 0;
-			binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-				     "%d: merge free, buffer %pK share page with %pK\n",
-				      alloc->pid, buffer, prev);
-		}
-	}
-	list_del(&buffer->entry);
-	if (free_page_start || free_page_end) {
-		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-			     "%d: merge free, buffer %pK do not share page%s%s with %pK or %pK\n",
-			     alloc->pid, buffer, free_page_start ? "" : " end",
-			     free_page_end ? "" : " start", prev, next);
-		binder_update_page_range(alloc, 0, free_page_start ?
-			buffer_start_page(buffer) : buffer_end_page(buffer),
-			(free_page_end ? buffer_end_page(buffer) :
-			buffer_start_page(buffer)) + PAGE_SIZE, NULL);
-	}
-}
-
-static void binder_free_buf_locked(struct binder_alloc *alloc,
-				   struct binder_buffer *buffer)
-{
-	size_t size, buffer_size;
-
-	buffer_size = binder_alloc_buffer_size(alloc, buffer);
-
-	size = ALIGN(buffer->data_size, sizeof(void *)) +
-		ALIGN(buffer->offsets_size, sizeof(void *)) +
-		ALIGN(buffer->extra_buffers_size, sizeof(void *));
-
-	binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-		     "%d: binder_free_buf %pK size %zd buffer_size %zd\n",
-		      alloc->pid, buffer, size, buffer_size);
-
-	BUG_ON(buffer->free);
-	BUG_ON(size > buffer_size);
-	BUG_ON(buffer->transaction != NULL);
-	BUG_ON((void *)buffer < alloc->buffer);
-	BUG_ON((void *)buffer > alloc->buffer + alloc->buffer_size);
-
-	if (buffer->async_transaction) {
-		alloc->free_async_space += size + sizeof(struct binder_buffer);
-
-		binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC_ASYNC,
-			     "%d: binder_free_buf size %zd async free %zd\n",
-			      alloc->pid, size, alloc->free_async_space);
-	}
-
-	binder_update_page_range(alloc, 0,
-		(void *)PAGE_ALIGN((uintptr_t)buffer->data),
-		(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK),
-		NULL);
-
-	rb_erase(&buffer->rb_node, &alloc->allocated_buffers);
-	buffer->free = 1;
-	if (!list_is_last(&buffer->entry, &alloc->buffers)) {
-		struct binder_buffer *next = list_entry(buffer->entry.next,
-						struct binder_buffer, entry);
-
-		if (next->free) {
-			rb_erase(&next->rb_node, &alloc->free_buffers);
-			binder_delete_free_buffer(alloc, next);
-		}
-	}
-	if (alloc->buffers.next != &buffer->entry) {
-		struct binder_buffer *prev = list_entry(buffer->entry.prev,
-						struct binder_buffer, entry);
-
-		if (prev->free) {
-			binder_delete_free_buffer(alloc, buffer);
-			rb_erase(&prev->rb_node, &alloc->free_buffers);
-			buffer = prev;
-		}
-	}
-	binder_insert_free_buffer(alloc, buffer);
-}
-
-void binder_alloc_free_buf(struct binder_alloc *alloc,
-			    struct binder_buffer *buffer)
-{
-	mutex_lock(&alloc->mutex);
-	binder_free_buf_locked(alloc, buffer);
-	mutex_unlock(&alloc->mutex);
-}
-
 static struct binder_node *binder_get_node(struct binder_proc *proc,
 					   binder_uintptr_t ptr)
 {
@@ -3467,12 +2881,6 @@ static void binder_vma_open(struct vm_area_struct *vma)
 		     (unsigned long)pgprot_val(vma->vm_page_prot));
 }
 
-void binder_alloc_vma_close(struct binder_alloc *alloc)
-{
-	WRITE_ONCE(alloc->vma, NULL);
-	WRITE_ONCE(alloc->vma_vm_mm, NULL);
-}
-
 static void binder_vma_close(struct vm_area_struct *vma)
 {
 	struct binder_proc *proc = vma->vm_private_data;
@@ -3497,86 +2905,6 @@ static const struct vm_operations_struct binder_vm_ops = {
 	.fault = binder_vm_fault,
 };
 
-int binder_alloc_mmap_handler(struct binder_alloc *alloc,
-			      struct vm_area_struct *vma)
-{
-	int ret;
-	struct vm_struct *area;
-	const char *failure_string;
-	struct binder_buffer *buffer;
-
-	mutex_lock(&binder_alloc_mmap_lock);
-	if (alloc->buffer) {
-		ret = -EBUSY;
-		failure_string = "already mapped";
-		goto err_already_mapped;
-	}
-
-	area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
-	if (area == NULL) {
-		ret = -ENOMEM;
-		failure_string = "get_vm_area";
-		goto err_get_vm_area_failed;
-	}
-	alloc->buffer = area->addr;
-	alloc->user_buffer_offset =
-			vma->vm_start - (uintptr_t)alloc->buffer;
-	mutex_unlock(&binder_alloc_mmap_lock);
-
-#ifdef CONFIG_CPU_CACHE_VIPT
-	if (cache_is_vipt_aliasing()) {
-		while (CACHE_COLOUR(
-			(vma->vm_start ^ (uint32_t)alloc->buffer))) {
-			pr_info("%s: %d %lx-%lx maps %pK bad alignment\n",
-				__func__,
-				alloc->pid, vma->vm_start, vma->vm_end,
-				alloc->buffer);
-			vma->vm_start += PAGE_SIZE;
-		}
-	}
-#endif
-	alloc->pages = kzalloc(sizeof(alloc->pages[0]) *
-				   ((vma->vm_end - vma->vm_start) / PAGE_SIZE),
-			       GFP_KERNEL);
-	if (alloc->pages == NULL) {
-		ret = -ENOMEM;
-		failure_string = "alloc page array";
-		goto err_alloc_pages_failed;
-	}
-	alloc->buffer_size = vma->vm_end - vma->vm_start;
-
-	if (binder_update_page_range(alloc, 1, alloc->buffer,
-				     alloc->buffer + PAGE_SIZE, vma)) {
-		ret = -ENOMEM;
-		failure_string = "alloc small buf";
-		goto err_alloc_small_buf_failed;
-	}
-	buffer = alloc->buffer;
-	INIT_LIST_HEAD(&alloc->buffers);
-	list_add(&buffer->entry, &alloc->buffers);
-	buffer->free = 1;
-	binder_insert_free_buffer(alloc, buffer);
-	alloc->free_async_space = alloc->buffer_size / 2;
-	barrier();
-	alloc->vma = vma;
-	alloc->vma_vm_mm = vma->vm_mm;
-
-	return 0;
-
-err_alloc_small_buf_failed:
-	kfree(alloc->pages);
-	alloc->pages = NULL;
-err_alloc_pages_failed:
-	mutex_lock(&binder_alloc_mmap_lock);
-	vfree(alloc->buffer);
-	alloc->buffer = NULL;
-err_get_vm_area_failed:
-err_already_mapped:
-	mutex_unlock(&binder_alloc_mmap_lock);
-	pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
-	       alloc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
-	return ret;
-}
-
 static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
 {
 	int ret;
@@ -3616,13 +2944,6 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
 	return ret;
 }
 
-void binder_alloc_init(struct binder_alloc *alloc)
-{
-	alloc->tsk = current->group_leader;
-	alloc->pid = current->group_leader->pid;
-	mutex_init(&alloc->mutex);
-}
-
 static int binder_open(struct inode *nodp, struct file *filp)
 {
 	struct binder_proc *proc;
@@ -3758,55 +3079,6 @@ static int binder_node_release(struct binder_node *node, int refs)
 	return refs;
 }
 
-void binder_alloc_deferred_release(struct binder_alloc *alloc)
-{
-	struct rb_node *n;
-	int buffers, page_count;
-
-	BUG_ON(alloc->vma);
-
-	buffers = 0;
-	mutex_lock(&alloc->mutex);
-	while ((n = rb_first(&alloc->allocated_buffers))) {
-		struct binder_buffer *buffer;
-
-		buffer = rb_entry(n, struct binder_buffer, rb_node);
-
-		/* Transaction should already have been freed */
-		BUG_ON(buffer->transaction);
-
-		binder_free_buf_locked(alloc, buffer);
-		buffers++;
-	}
-
-	page_count = 0;
-	if (alloc->pages) {
-		int i;
-
-		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-			void *page_addr;
-
-			if (!alloc->pages[i])
-				continue;
-
-			page_addr = alloc->buffer + i * PAGE_SIZE;
-			binder_alloc_debug(BINDER_ALLOC_DEBUG_BUFFER_ALLOC,
-				     "%s: %d: page %d at %pK not freed\n",
-				     __func__, alloc->pid, i, page_addr);
-			unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
-			__free_page(alloc->pages[i]);
-			page_count++;
-		}
-		kfree(alloc->pages);
-		vfree(alloc->buffer);
-	}
-	mutex_unlock(&alloc->mutex);
-
-	binder_alloc_debug(BINDER_ALLOC_DEBUG_OPEN_CLOSE,
-		     "%s: %d buffers %d, pages %d\n",
-		     __func__, alloc->pid, buffers, page_count);
-}
-
 static void binder_deferred_release(struct binder_proc *proc)
 {
 	struct binder_context *context = proc->context;
@@ -4051,27 +3323,6 @@ static void print_binder_ref(struct seq_file *m, struct binder_ref *ref)
 		   ref->node->debug_id, ref->strong, ref->weak, ref->death);
 }
 
-static void print_binder_buffer(struct seq_file *m, const char *prefix,
-				struct binder_buffer *buffer)
-{
-	seq_printf(m, "%s %d: %pK size %zd:%zd %s\n",
-		   prefix, buffer->debug_id, buffer->data,
-		   buffer->data_size, buffer->offsets_size,
-		   buffer->transaction ? "active" : "delivered");
-}
-
-void binder_alloc_print_allocated(struct seq_file *m,
-				  struct binder_alloc *alloc)
-{
-	struct rb_node *n;
-
-	mutex_lock(&alloc->mutex);
-	for (n = rb_first(&alloc->allocated_buffers); n != NULL; n = rb_next(n))
-		print_binder_buffer(m, "  buffer",
-				    rb_entry(n, struct binder_buffer, rb_node));
-	mutex_unlock(&alloc->mutex);
-}
-
 static void print_binder_proc(struct seq_file *m,
 			      struct binder_proc *proc, int print_all)
 {
@@ -4198,18 +3449,6 @@ static void print_binder_stats(struct seq_file *m, const char *prefix,
 	}
 }
 
-int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
-{
-	struct rb_node *n;
-	int count = 0;
-
-	mutex_lock(&alloc->mutex);
-	for (n = rb_first(&alloc->allocated_buffers); n != NULL; n = rb_next(n))
-		count++;
-	mutex_unlock(&alloc->mutex);
-	return count;
-}
-
 static void print_binder_proc_stats(struct seq_file *m,
 				    struct binder_proc *proc)
 {
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
new file mode 100644
index 000000000000..198d04c5d958
--- /dev/null
+++ b/drivers/android/binder_alloc.c
@@ -0,0 +1,759 @@
+/* binder_alloc.c
+ *
+ * Android IPC Subsystem
+ *
+ * Copyright (C) 2007-2017 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <asm/cacheflush.h>
+#include <linux/list.h>
+#include <linux/sched/mm.h>
+#include <linux/module.h>
+#include <linux/rtmutex.h>
+#include <linux/rbtree.h>
+#include <linux/seq_file.h>
+#include <linux/vmalloc.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include "binder_alloc.h"
+#include "binder_trace.h"
+
+static DEFINE_MUTEX(binder_alloc_mmap_lock);
+
+enum {
+	BINDER_DEBUG_OPEN_CLOSE             = 1U << 1,
+	BINDER_DEBUG_BUFFER_ALLOC           = 1U << 2,
+	BINDER_DEBUG_BUFFER_ALLOC_ASYNC     = 1U << 3,
+};
+static uint32_t binder_alloc_debug_mask;
+
+module_param_named(debug_mask, binder_alloc_debug_mask,
+		   uint, 0644);
+
+#define binder_alloc_debug(mask, x...) \
+	do { \
+		if (binder_alloc_debug_mask & mask) \
+			pr_info(x); \
+	} while (0)
+
+static size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
+				       struct binder_buffer *buffer)
+{
+	if (list_is_last(&buffer->entry, &alloc->buffers))
+		return alloc->buffer +
+		       alloc->buffer_size - (void *)buffer->data;
+	return (size_t)list_entry(buffer->entry.next,
+			  struct binder_buffer, entry) - (size_t)buffer->data;
+}
+
+static void binder_insert_free_buffer(struct binder_alloc *alloc,
+				      struct binder_buffer *new_buffer)
+{
+	struct rb_node **p = &alloc->free_buffers.rb_node;
+	struct rb_node *parent = NULL;
+	struct binder_buffer *buffer;
+	size_t buffer_size;
+	size_t new_buffer_size;
+
+	BUG_ON(!new_buffer->free);
+
+	new_buffer_size = binder_alloc_buffer_size(alloc, new_buffer);
+
+	binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+		     "%d: add free buffer, size %zd, at %pK\n",
+		      alloc->pid, new_buffer_size, new_buffer);
+
+	while (*p) {
+		parent = *p;
+		buffer = rb_entry(parent, struct binder_buffer, rb_node);
+		BUG_ON(!buffer->free);
+
+		buffer_size = binder_alloc_buffer_size(alloc, buffer);
+
+		if (new_buffer_size < buffer_size)
+			p = &parent->rb_left;
+		else
+			p = &parent->rb_right;
+	}
+	rb_link_node(&new_buffer->rb_node, parent, p);
+	rb_insert_color(&new_buffer->rb_node, &alloc->free_buffers);
+}
+
+static void binder_insert_allocated_buffer_locked(
+		struct binder_alloc *alloc, struct binder_buffer *new_buffer)
+{
+	struct rb_node **p = &alloc->allocated_buffers.rb_node;
+	struct rb_node *parent = NULL;
+	struct binder_buffer *buffer;
+
+	BUG_ON(new_buffer->free);
+
+	while (*p) {
+		parent = *p;
+		buffer = rb_entry(parent, struct binder_buffer, rb_node);
+		BUG_ON(buffer->free);
+
+		if (new_buffer < buffer)
+			p = &parent->rb_left;
+		else if (new_buffer > buffer)
+			p = &parent->rb_right;
+		else
+			BUG();
+	}
+	rb_link_node(&new_buffer->rb_node, parent, p);
+	rb_insert_color(&new_buffer->rb_node, &alloc->allocated_buffers);
+}
+
+static struct binder_buffer *binder_alloc_buffer_lookup_locked(
+		struct binder_alloc *alloc,
+		uintptr_t user_ptr)
+{
+	struct rb_node *n = alloc->allocated_buffers.rb_node;
+	struct binder_buffer *buffer;
+	struct binder_buffer *kern_ptr;
+
+	kern_ptr = (struct binder_buffer *)(user_ptr - alloc->user_buffer_offset
+		- offsetof(struct binder_buffer, data));
+
+	while (n) {
+		buffer = rb_entry(n, struct binder_buffer, rb_node);
+		BUG_ON(buffer->free);
+
+		if (kern_ptr < buffer)
+			n = n->rb_left;
+		else if (kern_ptr > buffer)
+			n = n->rb_right;
+		else
+			return buffer;
+	}
+	return NULL;
+}
+
+/**
+ * binder_alloc_buffer_lookup() - get buffer given user ptr
+ * @alloc:	binder_alloc for this proc
+ * @user_ptr:	User pointer to buffer data
+ *
+ * Validate userspace pointer to buffer data and return buffer corresponding to
+ * that user pointer. Search the rb tree for buffer that matches user data
+ * pointer.
+ *
+ * Return:	Pointer to buffer or NULL
+ */
+struct binder_buffer *binder_alloc_buffer_lookup(struct binder_alloc *alloc,
+						 uintptr_t user_ptr)
+{
+	struct binder_buffer *buffer;
+
+	mutex_lock(&alloc->mutex);
+	buffer = binder_alloc_buffer_lookup_locked(alloc, user_ptr);
+	mutex_unlock(&alloc->mutex);
+	return buffer;
+}
+
+static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
+				    void *start, void *end,
+				    struct vm_area_struct *vma)
+{
+	void *page_addr;
+	unsigned long user_page_addr;
+	struct page **page;
+	struct mm_struct *mm;
+
+	binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+		     "%d: %s pages %pK-%pK\n", alloc->pid,
+		     allocate ? "allocate" : "free", start, end);
+
+	if (end <= start)
+		return 0;
+
+	trace_binder_update_page_range(alloc, allocate, start, end);
+
+	if (vma)
+		mm = NULL;
+	else
+		mm = get_task_mm(alloc->tsk);
+
+	if (mm) {
+		down_write(&mm->mmap_sem);
+		vma = alloc->vma;
+		if (vma && mm != alloc->vma_vm_mm) {
+			pr_err("%d: vma mm and task mm mismatch\n",
+				alloc->pid);
+			vma = NULL;
+		}
+	}
+
+	if (allocate == 0)
+		goto free_range;
+
+	if (vma == NULL) {
+		pr_err("%d: binder_alloc_buf failed to map pages in userspace, no vma\n",
+			alloc->pid);
+		goto err_no_vma;
+	}
+
+	for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
+		int ret;
+
+		page = &alloc->pages[(page_addr - alloc->buffer) / PAGE_SIZE];
+
+		BUG_ON(*page);
+		*page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
+		if (*page == NULL) {
+			pr_err("%d: binder_alloc_buf failed for page at %pK\n",
+				alloc->pid, page_addr);
+			goto err_alloc_page_failed;
+		}
+		ret = map_kernel_range_noflush((unsigned long)page_addr,
+					PAGE_SIZE, PAGE_KERNEL, page);
+		flush_cache_vmap((unsigned long)page_addr,
+				(unsigned long)page_addr + PAGE_SIZE);
+		if (ret != 1) {
+			pr_err("%d: binder_alloc_buf failed to map page at %pK in kernel\n",
+			       alloc->pid, page_addr);
+			goto err_map_kernel_failed;
+		}
+		user_page_addr =
+			(uintptr_t)page_addr + alloc->user_buffer_offset;
+		ret = vm_insert_page(vma, user_page_addr, page[0]);
+		if (ret) {
+			pr_err("%d: binder_alloc_buf failed to map page at %lx in userspace\n",
+			       alloc->pid, user_page_addr);
+			goto err_vm_insert_page_failed;
+		}
+		/* vm_insert_page does not seem to increment the refcount */
+	}
+	if (mm) {
+		up_write(&mm->mmap_sem);
+		mmput(mm);
+	}
+	return 0;
+
+free_range:
+	for (page_addr = end - PAGE_SIZE; page_addr >= start;
+	     page_addr -= PAGE_SIZE) {
+		page = &alloc->pages[(page_addr - alloc->buffer) / PAGE_SIZE];
+		if (vma)
+			zap_page_range(vma, (uintptr_t)page_addr +
+				alloc->user_buffer_offset, PAGE_SIZE);
+err_vm_insert_page_failed:
+		unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
+err_map_kernel_failed:
+		__free_page(*page);
+		*page = NULL;
+err_alloc_page_failed:
+		;
+	}
+err_no_vma:
+	if (mm) {
+		up_write(&mm->mmap_sem);
+		mmput(mm);
+	}
+	return -ENOMEM;
+}
+
+struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc,
+						  size_t data_size,
+						  size_t offsets_size,
+						  size_t extra_buffers_size,
+						  int is_async)
+{
+	struct rb_node *n = alloc->free_buffers.rb_node;
+	struct binder_buffer *buffer;
+	size_t buffer_size;
+	struct rb_node *best_fit = NULL;
+	void *has_page_addr;
+	void *end_page_addr;
+	size_t size, data_offsets_size;
+
+	if (alloc->vma == NULL) {
+		pr_err("%d: binder_alloc_buf, no vma\n",
+		       alloc->pid);
+		return NULL;
+	}
+
+	data_offsets_size = ALIGN(data_size, sizeof(void *)) +
+		ALIGN(offsets_size, sizeof(void *));
+
+	if (data_offsets_size < data_size || data_offsets_size < offsets_size) {
+		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+				"%d: got transaction with invalid size %zd-%zd\n",
+				alloc->pid, data_size, offsets_size);
+		return NULL;
+	}
+	size = data_offsets_size + ALIGN(extra_buffers_size, sizeof(void *));
+	if (size < data_offsets_size || size < extra_buffers_size) {
+		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+				"%d: got transaction with invalid extra_buffers_size %zd\n",
+				alloc->pid, extra_buffers_size);
+		return NULL;
+	}
+	if (is_async &&
+	    alloc->free_async_space < size + sizeof(struct binder_buffer)) {
+		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+			     "%d: binder_alloc_buf size %zd failed, no async space left\n",
+			      alloc->pid, size);
+		return NULL;
+	}
+
+	while (n) {
+		buffer = rb_entry(n, struct binder_buffer, rb_node);
+		BUG_ON(!buffer->free);
+		buffer_size = binder_alloc_buffer_size(alloc, buffer);
+
+		if (size < buffer_size) {
+			best_fit = n;
+			n = n->rb_left;
+		} else if (size > buffer_size)
+			n = n->rb_right;
+		else {
+			best_fit = n;
+			break;
+		}
+	}
+	if (best_fit == NULL) {
+		pr_err("%d: binder_alloc_buf size %zd failed, no address space\n",
+			alloc->pid, size);
+		return NULL;
+	}
+	if (n == NULL) {
+		buffer = rb_entry(best_fit, struct binder_buffer, rb_node);
+		buffer_size = binder_alloc_buffer_size(alloc, buffer);
+	}
+
+	binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+		     "%d: binder_alloc_buf size %zd got buffer %pK size %zd\n",
+		      alloc->pid, size, buffer, buffer_size);
+
+	has_page_addr =
+		(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK);
+	if (n == NULL) {
+		if (size + sizeof(struct binder_buffer) + 4 >= buffer_size)
+			buffer_size = size; /* no room for other buffers */
+		else
+			buffer_size = size + sizeof(struct binder_buffer);
+	}
+	end_page_addr =
+		(void *)PAGE_ALIGN((uintptr_t)buffer->data + buffer_size);
+	if (end_page_addr > has_page_addr)
+		end_page_addr = has_page_addr;
+	if (binder_update_page_range(alloc, 1,
+	    (void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr, NULL))
+		return NULL;
+
+	rb_erase(best_fit, &alloc->free_buffers);
+	buffer->free = 0;
+	binder_insert_allocated_buffer_locked(alloc, buffer);
+	if (buffer_size != size) {
+		struct binder_buffer *new_buffer = (void *)buffer->data + size;
+
+		list_add(&new_buffer->entry, &buffer->entry);
+		new_buffer->free = 1;
+		binder_insert_free_buffer(alloc, new_buffer);
+	}
+	binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+		     "%d: binder_alloc_buf size %zd got %pK\n",
+		      alloc->pid, size, buffer);
+	buffer->data_size = data_size;
+	buffer->offsets_size = offsets_size;
+	buffer->async_transaction = is_async;
+	buffer->extra_buffers_size = extra_buffers_size;
+	if (is_async) {
+		alloc->free_async_space -= size + sizeof(struct binder_buffer);
+		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
+			     "%d: binder_alloc_buf size %zd async free %zd\n",
+			      alloc->pid, size, alloc->free_async_space);
+	}
+	return buffer;
+}
+
+/**
+ * binder_alloc_new_buf() - Allocate a new binder buffer
+ * @alloc:              binder_alloc for this proc
+ * @data_size:          size of user data buffer
+ * @offsets_size:       size of the area for offsets to objects in the buffer
+ * @extra_buffers_size: size of extra space for meta-data (e.g., security context)
+ * @is_async:           buffer for async transaction
+ *
+ * Allocate a new buffer given the requested sizes. Returns
+ * the kernel version of the buffer pointer. The size allocated
+ * is the sum of the three given sizes (each rounded up to
+ * a pointer-sized boundary).
+ *
+ * Return:	The allocated buffer or %NULL if error
+ */
+struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
+					   size_t data_size,
+					   size_t offsets_size,
+					   size_t extra_buffers_size,
+					   int is_async)
+{
+	struct binder_buffer *buffer;
+
+	mutex_lock(&alloc->mutex);
+	buffer = binder_alloc_new_buf_locked(alloc, data_size, offsets_size,
+					     extra_buffers_size, is_async);
+	mutex_unlock(&alloc->mutex);
+	return buffer;
+}
+
+static void *buffer_start_page(struct binder_buffer *buffer)
+{
+	return (void *)((uintptr_t)buffer & PAGE_MASK);
+}
+
+static void *buffer_end_page(struct binder_buffer *buffer)
+{
+	return (void *)(((uintptr_t)(buffer + 1) - 1) & PAGE_MASK);
+}
+
+static void binder_delete_free_buffer(struct binder_alloc *alloc,
+				      struct binder_buffer *buffer)
+{
+	struct binder_buffer *prev, *next = NULL;
+	int free_page_end = 1;
+	int free_page_start = 1;
+
+	BUG_ON(alloc->buffers.next == &buffer->entry);
+	prev = list_entry(buffer->entry.prev, struct binder_buffer, entry);
+	BUG_ON(!prev->free);
+	if (buffer_end_page(prev) == buffer_start_page(buffer)) {
+		free_page_start = 0;
+		if (buffer_end_page(prev) == buffer_end_page(buffer))
+			free_page_end = 0;
+		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+			     "%d: merge free, buffer %pK share page with %pK\n",
+			      alloc->pid, buffer, prev);
+	}
+
+	if (!list_is_last(&buffer->entry, &alloc->buffers)) {
+		next = list_entry(buffer->entry.next,
+				  struct binder_buffer, entry);
+		if (buffer_start_page(next) == buffer_end_page(buffer)) {
+			free_page_end = 0;
+			if (buffer_start_page(next) ==
+			    buffer_start_page(buffer))
+				free_page_start = 0;
+			binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+				     "%d: merge free, buffer %pK share page with %pK\n",
+				      alloc->pid, buffer, next);
+		}
+	}
+	list_del(&buffer->entry);
+	if (free_page_start || free_page_end) {
+		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+			     "%d: merge free, buffer %pK do not share page%s%s with %pK or %pK\n",
+			     alloc->pid, buffer, free_page_start ? "" : " end",
+			     free_page_end ? "" : " start", prev, next);
+		binder_update_page_range(alloc, 0, free_page_start ?
+			buffer_start_page(buffer) : buffer_end_page(buffer),
+			(free_page_end ? buffer_end_page(buffer) :
+			buffer_start_page(buffer)) + PAGE_SIZE, NULL);
+	}
+}
+
+static void binder_free_buf_locked(struct binder_alloc *alloc,
+				   struct binder_buffer *buffer)
+{
+	size_t size, buffer_size;
+
+	buffer_size = binder_alloc_buffer_size(alloc, buffer);
+
+	size = ALIGN(buffer->data_size, sizeof(void *)) +
+		ALIGN(buffer->offsets_size, sizeof(void *)) +
+		ALIGN(buffer->extra_buffers_size, sizeof(void *));
+
+	binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+		     "%d: binder_free_buf %pK size %zd buffer_size %zd\n",
+		      alloc->pid, buffer, size, buffer_size);
+
+	BUG_ON(buffer->free);
+	BUG_ON(size > buffer_size);
+	BUG_ON(buffer->transaction != NULL);
+	BUG_ON((void *)buffer < alloc->buffer);
+	BUG_ON((void *)buffer > alloc->buffer + alloc->buffer_size);
+
+	if (buffer->async_transaction) {
+		alloc->free_async_space += size + sizeof(struct binder_buffer);
+
+		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
+			     "%d: binder_free_buf size %zd async free %zd\n",
+			      alloc->pid, size, alloc->free_async_space);
+	}
+
+	binder_update_page_range(alloc, 0,
+		(void *)PAGE_ALIGN((uintptr_t)buffer->data),
+		(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK),
+		NULL);
+
+	rb_erase(&buffer->rb_node, &alloc->allocated_buffers);
+	buffer->free = 1;
+	if (!list_is_last(&buffer->entry, &alloc->buffers)) {
+		struct binder_buffer *next = list_entry(buffer->entry.next,
+						struct binder_buffer, entry);
+
+		if (next->free) {
+			rb_erase(&next->rb_node, &alloc->free_buffers);
+			binder_delete_free_buffer(alloc, next);
+		}
+	}
+	if (alloc->buffers.next != &buffer->entry) {
+		struct binder_buffer *prev = list_entry(buffer->entry.prev,
+						struct binder_buffer, entry);
+
+		if (prev->free) {
+			binder_delete_free_buffer(alloc, buffer);
+			rb_erase(&prev->rb_node, &alloc->free_buffers);
+			buffer = prev;
+		}
+	}
+	binder_insert_free_buffer(alloc, buffer);
+}
+
+/**
+ * binder_alloc_free_buf() - free a binder buffer
+ * @alloc:	binder_alloc for this proc
+ * @buffer:	kernel pointer to buffer
+ *
+ * Free the buffer allocated via binder_alloc_new_buf()
+ */
+void binder_alloc_free_buf(struct binder_alloc *alloc,
+			    struct binder_buffer *buffer)
+{
+	mutex_lock(&alloc->mutex);
+	binder_free_buf_locked(alloc, buffer);
+	mutex_unlock(&alloc->mutex);
+}
+
+/**
+ * binder_alloc_mmap_handler() - map virtual address space for proc
+ * @alloc:	alloc structure for this proc
+ * @vma:	vma passed to mmap()
+ *
+ * Called by binder_mmap() to initialize the space specified in
+ * vma for allocating binder buffers
+ *
+ * Return:
+ *      0 = success
+ *      -EBUSY = address space already mapped
+ *      -ENOMEM = failed to map memory to given address space
+ */
+int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+			      struct vm_area_struct *vma)
+{
+	int ret;
+	struct vm_struct *area;
+	const char *failure_string;
+	struct binder_buffer *buffer;
+
+	mutex_lock(&binder_alloc_mmap_lock);
+	if (alloc->buffer) {
+		ret = -EBUSY;
+		failure_string = "already mapped";
+		goto err_already_mapped;
+	}
+
+	area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
+	if (area == NULL) {
+		ret = -ENOMEM;
+		failure_string = "get_vm_area";
+		goto err_get_vm_area_failed;
+	}
+	alloc->buffer = area->addr;
+	alloc->user_buffer_offset =
+		vma->vm_start - (uintptr_t)alloc->buffer;
+	mutex_unlock(&binder_alloc_mmap_lock);
+
+#ifdef CONFIG_CPU_CACHE_VIPT
+	if (cache_is_vipt_aliasing()) {
+		while (CACHE_COLOUR(
+				(vma->vm_start ^ (uint32_t)alloc->buffer))) {
+			pr_info("%s: %d %lx-%lx maps %pK bad alignment\n",
+				__func__, alloc->pid, vma->vm_start,
+				vma->vm_end, alloc->buffer);
+			vma->vm_start += PAGE_SIZE;
+		}
+	}
+#endif
+	alloc->pages = kzalloc(sizeof(alloc->pages[0]) *
+				   ((vma->vm_end - vma->vm_start) / PAGE_SIZE),
+			       GFP_KERNEL);
+	if (alloc->pages == NULL) {
+		ret = -ENOMEM;
+		failure_string = "alloc page array";
+		goto err_alloc_pages_failed;
+	}
+	alloc->buffer_size = vma->vm_end - vma->vm_start;
+
+	if (binder_update_page_range(alloc, 1, alloc->buffer,
+				     alloc->buffer + PAGE_SIZE, vma)) {
+		ret = -ENOMEM;
+		failure_string = "alloc small buf";
+		goto err_alloc_small_buf_failed;
+	}
+	buffer = alloc->buffer;
+	INIT_LIST_HEAD(&alloc->buffers);
+	list_add(&buffer->entry, &alloc->buffers);
+	buffer->free = 1;
+	binder_insert_free_buffer(alloc, buffer);
+	alloc->free_async_space = alloc->buffer_size / 2;
+	barrier();
+	alloc->vma = vma;
+	alloc->vma_vm_mm = vma->vm_mm;
+
+	return 0;
+
+err_alloc_small_buf_failed:
+	kfree(alloc->pages);
+	alloc->pages = NULL;
+err_alloc_pages_failed:
+	mutex_lock(&binder_alloc_mmap_lock);
+	vfree(alloc->buffer);
+	alloc->buffer = NULL;
+err_get_vm_area_failed:
+err_already_mapped:
+	mutex_unlock(&binder_alloc_mmap_lock);
+	pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
+	       alloc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
+	return ret;
+}
+
+
+void binder_alloc_deferred_release(struct binder_alloc *alloc)
+{
+	struct rb_node *n;
+	int buffers, page_count;
+
+	BUG_ON(alloc->vma);
+
+	buffers = 0;
+	mutex_lock(&alloc->mutex);
+	while ((n = rb_first(&alloc->allocated_buffers))) {
+		struct binder_buffer *buffer;
+
+		buffer = rb_entry(n, struct binder_buffer, rb_node);
+
+		/* Transaction should already have been freed */
+		BUG_ON(buffer->transaction);
+
+		binder_free_buf_locked(alloc, buffer);
+		buffers++;
+	}
+
+	page_count = 0;
+	if (alloc->pages) {
+		int i;
+
+		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+			void *page_addr;
+
+			if (!alloc->pages[i])
+				continue;
+
+			page_addr = alloc->buffer + i * PAGE_SIZE;
+			binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+				     "%s: %d: page %d at %pK not freed\n",
+				     __func__, alloc->pid, i, page_addr);
+			unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
+			__free_page(alloc->pages[i]);
+			page_count++;
+		}
+		kfree(alloc->pages);
+		vfree(alloc->buffer);
+	}
+	mutex_unlock(&alloc->mutex);
+
+	binder_alloc_debug(BINDER_DEBUG_OPEN_CLOSE,
+		     "%s: %d buffers %d, pages %d\n",
+		     __func__, alloc->pid, buffers, page_count);
+}
+
+static void print_binder_buffer(struct seq_file *m, const char *prefix,
+				struct binder_buffer *buffer)
+{
+	seq_printf(m, "%s %d: %pK size %zd:%zd %s\n",
+		   prefix, buffer->debug_id, buffer->data,
+		   buffer->data_size, buffer->offsets_size,
+		   buffer->transaction ? "active" : "delivered");
+}
+
+/**
+ * binder_alloc_print_allocated() - print buffer info
+ * @m:     seq_file for output via seq_printf()
+ * @alloc: binder_alloc for this proc
+ *
+ * Prints information about every buffer associated with
+ * the binder_alloc state to the given seq_file
+ */
+void binder_alloc_print_allocated(struct seq_file *m,
+				  struct binder_alloc *alloc)
+{
+	struct rb_node *n;
+
+	mutex_lock(&alloc->mutex);
+	for (n = rb_first(&alloc->allocated_buffers); n != NULL; n = rb_next(n))
+		print_binder_buffer(m, "  buffer",
+				    rb_entry(n, struct binder_buffer, rb_node));
+	mutex_unlock(&alloc->mutex);
+}
+
+/**
+ * binder_alloc_get_allocated_count() - return count of buffers
+ * @alloc: binder_alloc for this proc
+ *
+ * Return: count of allocated buffers
+ */
+int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
+{
+	struct rb_node *n;
+	int count = 0;
+
+	mutex_lock(&alloc->mutex);
+	for (n = rb_first(&alloc->allocated_buffers); n != NULL; n = rb_next(n))
+		count++;
+	mutex_unlock(&alloc->mutex);
+	return count;
+}
+
+
+/**
+ * binder_alloc_vma_close() - invalidate address space
+ * @alloc: binder_alloc for this proc
+ *
+ * Called from binder_vma_close() when releasing address space.
+ * Clears alloc->vma to prevent new incoming transactions from
+ * allocating more buffers.
+ */
+void binder_alloc_vma_close(struct binder_alloc *alloc)
+{
+	WRITE_ONCE(alloc->vma, NULL);
+	WRITE_ONCE(alloc->vma_vm_mm, NULL);
+}
+
+/**
+ * binder_alloc_init() - called by binder_open() for per-proc initialization
+ * @alloc: binder_alloc for this proc
+ *
+ * Called from binder_open() to initialize binder_alloc fields for
+ * new binder proc
+ */
+void binder_alloc_init(struct binder_alloc *alloc)
+{
+	alloc->tsk = current->group_leader;
+	alloc->pid = current->group_leader->pid;
+	mutex_init(&alloc->mutex);
+}
+
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
new file mode 100644
index 000000000000..721c511431f9
--- /dev/null
+++ b/drivers/android/binder_alloc.h
@@ -0,0 +1,162 @@
+/*
+ * Copyright (C) 2017 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _LINUX_BINDER_ALLOC_H
+#define _LINUX_BINDER_ALLOC_H
+
+#include <linux/rbtree.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/rtmutex.h>
+#include <linux/vmalloc.h>
+#include <linux/slab.h>
+
+struct binder_transaction;
+
+/**
+ * struct binder_buffer - buffer used for binder transactions
+ * @entry:              entry in alloc->buffers list
+ * @rb_node:            node for allocated_buffers/free_buffers rb trees
+ * @free:               true if buffer is free
+ * @allow_user_free:    userspace may free the buffer via BC_FREE_BUFFER
+ * @async_transaction:  buffer is in use for an async transaction
+ * @debug_id:           unique ID for debugging
+ * @transaction:        binder transaction using this buffer
+ * @target_node:        binder_node receiving the transaction
+ * @data_size:          size of @transaction data
+ * @offsets_size:       size of the array of offsets to objects in @data
+ * @extra_buffers_size: size of extra space for meta-data (e.g., security context)
+ * @data:               pointer to start of the buffer data
+ *
+ * Bookkeeping structure for binder transaction buffers
+ */
+struct binder_buffer {
+	struct list_head entry; /* free and allocated entries by address */
+	struct rb_node rb_node; /* free entry by size or allocated entry */
+				/* by address */
+	unsigned free:1;
+	unsigned allow_user_free:1;
+	unsigned async_transaction:1;
+	unsigned debug_id:29;
+
+	struct binder_transaction *transaction;
+
+	struct binder_node *target_node;
+	size_t data_size;
+	size_t offsets_size;
+	size_t extra_buffers_size;
+	uint8_t data[0];
+};
+
+/**
+ * struct binder_alloc - per-binder proc state for binder allocator
+ * @vma:                vm_area_struct passed to mmap_handler
+ *                      (invariant after mmap)
+ * @tsk:                task_struct for the task that called init for this proc
+ *                      (invariant after init)
+ * @vma_vm_mm:          copy of vma->vm_mm (invariant after mmap)
+ * @buffer:             base of per-proc address space mapped via mmap
+ * @user_buffer_offset: offset between user and kernel VAs for buffer
+ * @buffers:            list of all buffers for this proc
+ * @free_buffers:       rb tree of buffers available for allocation
+ *                      sorted by size
+ * @allocated_buffers:  rb tree of allocated buffers sorted by address
+ * @free_async_space:   VA space available for async buffers. This is
+ *                      initialized at mmap time to 1/2 the full VA space
+ * @pages:              array of struct page pointers, one for each
+ *                      page of mmap'd space
+ * @buffer_size:        size of address space specified via mmap
+ * @pid:                pid for associated binder_proc (invariant after init)
+ *
+ * Bookkeeping structure for per-proc address space management for binder
+ * buffers. It is normally initialized during binder_open() and binder_mmap()
+ * calls. The address space is used for both user-visible buffers and for
+ * struct binder_buffer objects used to track the user buffers
+ */
+struct binder_alloc {
+	struct mutex mutex;
+	struct task_struct *tsk;
+	struct vm_area_struct *vma;
+	struct mm_struct *vma_vm_mm;
+	void *buffer;
+	ptrdiff_t user_buffer_offset;
+	struct list_head buffers;
+	struct rb_root free_buffers;
+	struct rb_root allocated_buffers;
+	size_t free_async_space;
+	struct page **pages;
+	size_t buffer_size;
+	uint32_t buffer_free;
+	int pid;
+};
+
+extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
+						  size_t data_size,
+						  size_t offsets_size,
+						  size_t extra_buffers_size,
+						  int is_async);
+extern void binder_alloc_init(struct binder_alloc *alloc);
+extern void binder_alloc_vma_close(struct binder_alloc *alloc);
+extern struct binder_buffer *
+binder_alloc_buffer_lookup(struct binder_alloc *alloc,
+			   uintptr_t user_ptr);
+extern void binder_alloc_free_buf(struct binder_alloc *alloc,
+				  struct binder_buffer *buffer);
+extern int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+				     struct vm_area_struct *vma);
+extern void binder_alloc_deferred_release(struct binder_alloc *alloc);
+extern int binder_alloc_get_allocated_count(struct binder_alloc *alloc);
+extern void binder_alloc_print_allocated(struct seq_file *m,
+					 struct binder_alloc *alloc);
+
+/**
+ * binder_alloc_get_free_async_space() - get space available for async buffers
+ * @alloc:	binder_alloc for this proc
+ *
+ * Return:	the bytes remaining in the address space for async transactions
+ */
+static inline size_t
+binder_alloc_get_free_async_space(struct binder_alloc *alloc)
+{
+	size_t free_async_space;
+
+	mutex_lock(&alloc->mutex);
+	free_async_space = alloc->free_async_space;
+	mutex_unlock(&alloc->mutex);
+	return free_async_space;
+}
+
+/**
+ * binder_alloc_get_user_buffer_offset() - get offset between kernel/user addrs
+ * @alloc:	binder_alloc for this proc
+ *
+ * Return:	the offset between kernel and user-space addresses to use for
+ * virtual address conversion
+ */
+static inline ptrdiff_t
+binder_alloc_get_user_buffer_offset(struct binder_alloc *alloc)
+{
+	/*
+	 * user_buffer_offset is constant if vma is set and
+	 * undefined if vma is not set. It is possible to
+	 * get here with !alloc->vma if the target process
+	 * is dying while a transaction is being initiated.
+	 * Returning the old value is ok in this case and
+	 * the transaction will fail.
+	 */
+	return alloc->user_buffer_offset;
+}
+
+#endif /* _LINUX_BINDER_ALLOC_H */
+
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 08/37] binder: remove binder_debug_no_lock mechanism
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (6 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 07/37] binder: move binder_alloc to separate file Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 09/37] binder: add protection for non-perf cases Todd Kjos
                   ` (29 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

With the global lock, there was a mechanism to access
binder driver debugging information with the global
lock disabled to debug deadlocks or other issues.
This mechanism is rarely (if ever) used anymore
and wasn't needed during the development of
fine-grained locking in the binder driver.
Remove it.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 31 ++++++++-----------------------
 1 file changed, 8 insertions(+), 23 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 0512971cfc53..10fda7ab9fa5 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -107,9 +107,6 @@ static uint32_t binder_debug_mask = BINDER_DEBUG_USER_ERROR |
 	BINDER_DEBUG_FAILED_TRANSACTION | BINDER_DEBUG_DEAD_TRANSACTION;
 module_param_named(debug_mask, binder_debug_mask, uint, S_IWUSR | S_IRUGO);
 
-static bool binder_debug_no_lock;
-module_param_named(proc_no_lock, binder_debug_no_lock, bool, S_IWUSR | S_IRUGO);
-
 static char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES;
 module_param_named(devices, binder_devices_param, charp, 0444);
 
@@ -3507,10 +3504,8 @@ static int binder_state_show(struct seq_file *m, void *unused)
 {
 	struct binder_proc *proc;
 	struct binder_node *node;
-	int do_lock = !binder_debug_no_lock;
 
-	if (do_lock)
-		binder_lock(__func__);
+	binder_lock(__func__);
 
 	seq_puts(m, "binder state:\n");
 
@@ -3521,18 +3516,15 @@ static int binder_state_show(struct seq_file *m, void *unused)
 
 	hlist_for_each_entry(proc, &binder_procs, proc_node)
 		print_binder_proc(m, proc, 1);
-	if (do_lock)
-		binder_unlock(__func__);
+	binder_unlock(__func__);
 	return 0;
 }
 
 static int binder_stats_show(struct seq_file *m, void *unused)
 {
 	struct binder_proc *proc;
-	int do_lock = !binder_debug_no_lock;
 
-	if (do_lock)
-		binder_lock(__func__);
+	binder_lock(__func__);
 
 	seq_puts(m, "binder stats:\n");
 
@@ -3540,24 +3532,20 @@ static int binder_stats_show(struct seq_file *m, void *unused)
 
 	hlist_for_each_entry(proc, &binder_procs, proc_node)
 		print_binder_proc_stats(m, proc);
-	if (do_lock)
-		binder_unlock(__func__);
+	binder_unlock(__func__);
 	return 0;
 }
 
 static int binder_transactions_show(struct seq_file *m, void *unused)
 {
 	struct binder_proc *proc;
-	int do_lock = !binder_debug_no_lock;
 
-	if (do_lock)
-		binder_lock(__func__);
+	binder_lock(__func__);
 
 	seq_puts(m, "binder transactions:\n");
 	hlist_for_each_entry(proc, &binder_procs, proc_node)
 		print_binder_proc(m, proc, 0);
-	if (do_lock)
-		binder_unlock(__func__);
+	binder_unlock(__func__);
 	return 0;
 }
 
@@ -3565,10 +3553,8 @@ static int binder_proc_show(struct seq_file *m, void *unused)
 {
 	struct binder_proc *itr;
 	int pid = (unsigned long)m->private;
-	int do_lock = !binder_debug_no_lock;
 
-	if (do_lock)
-		binder_lock(__func__);
+	binder_lock(__func__);
 
 	hlist_for_each_entry(itr, &binder_procs, proc_node) {
 		if (itr->pid == pid) {
@@ -3576,8 +3562,7 @@ static int binder_proc_show(struct seq_file *m, void *unused)
 			print_binder_proc(m, itr, 1);
 		}
 	}
-	if (do_lock)
-		binder_unlock(__func__);
+	binder_unlock(__func__);
 	return 0;
 }
 
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 09/37] binder: add protection for non-perf cases
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (7 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 08/37] binder: remove binder_debug_no_lock mechanism Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 10/37] binder: change binder_stats to atomics Todd Kjos
                   ` (28 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Add binder_dead_nodes_lock, binder_procs_lock, and
binder_context_mgr_node_lock to protect the associated global
lists and the context manager node pointer.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
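For reference, a minimal sketch of the pattern these locks enforce,
illustrative only and not part of the diff below (it reuses the
driver's names and assumes the usual kernel headers):

	#include <linux/mutex.h>
	#include <linux/list.h>

	static HLIST_HEAD(binder_procs);
	static DEFINE_MUTEX(binder_procs_lock);

	/* every traversal of the global proc list takes the lock */
	static void show_all_procs(struct seq_file *m)
	{
		struct binder_proc *proc;

		mutex_lock(&binder_procs_lock);
		hlist_for_each_entry(proc, &binder_procs, proc_node)
			print_binder_proc(m, proc, 1);
		mutex_unlock(&binder_procs_lock);
	}

binder_dead_nodes gets the same treatment with a spinlock
(binder_dead_nodes_lock), and the context manager node pointer is
only read or written under context_mgr_node_lock.
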
 drivers/android/binder.c | 81 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 63 insertions(+), 18 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 10fda7ab9fa5..fb484c6acd3e 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -46,12 +46,16 @@
 #include "binder_trace.h"
 
 static DEFINE_MUTEX(binder_main_lock);
+
+static HLIST_HEAD(binder_deferred_list);
 static DEFINE_MUTEX(binder_deferred_lock);
 
 static HLIST_HEAD(binder_devices);
 static HLIST_HEAD(binder_procs);
-static HLIST_HEAD(binder_deferred_list);
+static DEFINE_MUTEX(binder_procs_lock);
+
 static HLIST_HEAD(binder_dead_nodes);
+static DEFINE_SPINLOCK(binder_dead_nodes_lock);
 
 static struct dentry *binder_debugfs_dir_entry_root;
 static struct dentry *binder_debugfs_dir_entry_proc;
@@ -219,6 +223,8 @@ static struct binder_transaction_log_entry *binder_transaction_log_add(
 
 struct binder_context {
 	struct binder_node *binder_context_mgr_node;
+	struct mutex context_mgr_node_lock;
+
 	kuid_t binder_context_mgr_uid;
 	const char *name;
 };
@@ -570,7 +576,9 @@ static int binder_dec_node(struct binder_node *node, int strong, int internal)
 					     "refless node %d deleted\n",
 					     node->debug_id);
 			} else {
+				spin_lock(&binder_dead_nodes_lock);
 				hlist_del(&node->dead_node);
+				spin_unlock(&binder_dead_nodes_lock);
 				binder_debug(BINDER_DEBUG_INTERNAL_REFS,
 					     "dead node %d deleted\n",
 					     node->debug_id);
@@ -1454,11 +1462,14 @@ static void binder_transaction(struct binder_proc *proc,
 			}
 			target_node = ref->node;
 		} else {
+			mutex_lock(&context->context_mgr_node_lock);
 			target_node = context->binder_context_mgr_node;
 			if (target_node == NULL) {
 				return_error = BR_DEAD_REPLY;
+				mutex_unlock(&context->context_mgr_node_lock);
 				goto err_no_context_mgr_node;
 			}
+			mutex_unlock(&context->context_mgr_node_lock);
 		}
 		e->to_node = target_node->debug_id;
 		target_proc = target_node->proc;
@@ -1824,22 +1835,31 @@ static int binder_thread_write(struct binder_proc *proc,
 		case BC_RELEASE:
 		case BC_DECREFS: {
 			uint32_t target;
-			struct binder_ref *ref;
+			struct binder_ref *ref = NULL;
 			const char *debug_string;
 
 			if (get_user(target, (uint32_t __user *)ptr))
 				return -EFAULT;
+
 			ptr += sizeof(uint32_t);
-			if (target == 0 && context->binder_context_mgr_node &&
+			if (target == 0 &&
 			    (cmd == BC_INCREFS || cmd == BC_ACQUIRE)) {
-				ref = binder_get_ref_for_node(proc,
-					context->binder_context_mgr_node);
-				if (ref->desc != target) {
-					binder_user_error("%d:%d tried to acquire reference to desc 0, got %d instead\n",
-						proc->pid, thread->pid,
-						ref->desc);
+				struct binder_node *ctx_mgr_node;
+
+				mutex_lock(&context->context_mgr_node_lock);
+				ctx_mgr_node = context->binder_context_mgr_node;
+				if (ctx_mgr_node) {
+					ref = binder_get_ref_for_node(proc,
+							ctx_mgr_node);
+					if (ref && ref->desc != target) {
+						binder_user_error("%d:%d tried to acquire reference to desc 0, got %d instead\n",
+							proc->pid, thread->pid,
+							ref->desc);
+					}
 				}
-			} else
+				mutex_unlock(&context->context_mgr_node_lock);
+			}
+			if (ref == NULL)
 				ref = binder_get_ref(proc, target,
 						     cmd == BC_ACQUIRE ||
 						     cmd == BC_RELEASE);
@@ -2753,9 +2773,10 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp)
 	int ret = 0;
 	struct binder_proc *proc = filp->private_data;
 	struct binder_context *context = proc->context;
-
+	struct binder_node *new_node;
 	kuid_t curr_euid = current_euid();
 
+	mutex_lock(&context->context_mgr_node_lock);
 	if (context->binder_context_mgr_node) {
 		pr_err("BINDER_SET_CONTEXT_MGR already set\n");
 		ret = -EBUSY;
@@ -2776,16 +2797,18 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp)
 	} else {
 		context->binder_context_mgr_uid = curr_euid;
 	}
-	context->binder_context_mgr_node = binder_new_node(proc, 0, 0);
-	if (!context->binder_context_mgr_node) {
+	new_node = binder_new_node(proc, 0, 0);
+	if (!new_node) {
 		ret = -ENOMEM;
 		goto out;
 	}
-	context->binder_context_mgr_node->local_weak_refs++;
-	context->binder_context_mgr_node->local_strong_refs++;
-	context->binder_context_mgr_node->has_strong_ref = 1;
-	context->binder_context_mgr_node->has_weak_ref = 1;
+	new_node->local_weak_refs++;
+	new_node->local_strong_refs++;
+	new_node->has_strong_ref = 1;
+	new_node->has_weak_ref = 1;
+	context->binder_context_mgr_node = new_node;
 out:
+	mutex_unlock(&context->context_mgr_node_lock);
 	return ret;
 }
 
@@ -2965,13 +2988,16 @@ static int binder_open(struct inode *nodp, struct file *filp)
 	binder_lock(__func__);
 
 	binder_stats_created(BINDER_STAT_PROC);
-	hlist_add_head(&proc->proc_node, &binder_procs);
 	proc->pid = current->group_leader->pid;
 	INIT_LIST_HEAD(&proc->delivered_death);
 	filp->private_data = proc;
 
 	binder_unlock(__func__);
 
+	mutex_lock(&binder_procs_lock);
+	hlist_add_head(&proc->proc_node, &binder_procs);
+	mutex_unlock(&binder_procs_lock);
+
 	if (binder_debugfs_dir_entry_proc) {
 		char strbuf[11];
 
@@ -3050,7 +3076,10 @@ static int binder_node_release(struct binder_node *node, int refs)
 	node->proc = NULL;
 	node->local_strong_refs = 0;
 	node->local_weak_refs = 0;
+
+	spin_lock(&binder_dead_nodes_lock);
 	hlist_add_head(&node->dead_node, &binder_dead_nodes);
+	spin_unlock(&binder_dead_nodes_lock);
 
 	hlist_for_each_entry(ref, &node->refs, node_entry) {
 		refs++;
@@ -3084,8 +3113,11 @@ static void binder_deferred_release(struct binder_proc *proc)
 
 	BUG_ON(proc->files);
 
+	mutex_lock(&binder_procs_lock);
 	hlist_del(&proc->proc_node);
+	mutex_unlock(&binder_procs_lock);
 
+	mutex_lock(&context->context_mgr_node_lock);
 	if (context->binder_context_mgr_node &&
 	    context->binder_context_mgr_node->proc == proc) {
 		binder_debug(BINDER_DEBUG_DEAD_BINDER,
@@ -3093,6 +3125,7 @@ static void binder_deferred_release(struct binder_proc *proc)
 			     __func__, proc->pid);
 		context->binder_context_mgr_node = NULL;
 	}
+	mutex_unlock(&context->context_mgr_node_lock);
 
 	threads = 0;
 	active_transactions = 0;
@@ -3509,13 +3542,17 @@ static int binder_state_show(struct seq_file *m, void *unused)
 
 	seq_puts(m, "binder state:\n");
 
+	spin_lock(&binder_dead_nodes_lock);
 	if (!hlist_empty(&binder_dead_nodes))
 		seq_puts(m, "dead nodes:\n");
 	hlist_for_each_entry(node, &binder_dead_nodes, dead_node)
 		print_binder_node(m, node);
+	spin_unlock(&binder_dead_nodes_lock);
 
+	mutex_lock(&binder_procs_lock);
 	hlist_for_each_entry(proc, &binder_procs, proc_node)
 		print_binder_proc(m, proc, 1);
+	mutex_unlock(&binder_procs_lock);
 	binder_unlock(__func__);
 	return 0;
 }
@@ -3530,8 +3567,10 @@ static int binder_stats_show(struct seq_file *m, void *unused)
 
 	print_binder_stats(m, "", &binder_stats);
 
+	mutex_lock(&binder_procs_lock);
 	hlist_for_each_entry(proc, &binder_procs, proc_node)
 		print_binder_proc_stats(m, proc);
+	mutex_unlock(&binder_procs_lock);
 	binder_unlock(__func__);
 	return 0;
 }
@@ -3543,8 +3582,10 @@ static int binder_transactions_show(struct seq_file *m, void *unused)
 	binder_lock(__func__);
 
 	seq_puts(m, "binder transactions:\n");
+	mutex_lock(&binder_procs_lock);
 	hlist_for_each_entry(proc, &binder_procs, proc_node)
 		print_binder_proc(m, proc, 0);
+	mutex_unlock(&binder_procs_lock);
 	binder_unlock(__func__);
 	return 0;
 }
@@ -3556,12 +3597,15 @@ static int binder_proc_show(struct seq_file *m, void *unused)
 
 	binder_lock(__func__);
 
+	mutex_lock(&binder_procs_lock);
 	hlist_for_each_entry(itr, &binder_procs, proc_node) {
 		if (itr->pid == pid) {
 			seq_puts(m, "binder proc state:\n");
 			print_binder_proc(m, itr, 1);
 		}
 	}
+	mutex_unlock(&binder_procs_lock);
+
 	binder_unlock(__func__);
 	return 0;
 }
@@ -3622,6 +3666,7 @@ static int __init init_binder_device(const char *name)
 
 	binder_device->context.binder_context_mgr_uid = INVALID_UID;
 	binder_device->context.name = name;
+	mutex_init(&binder_device->context.context_mgr_node_lock);
 
 	ret = misc_register(&binder_device->miscdev);
 	if (ret < 0) {
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 10/37] binder: change binder_stats to atomics
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (8 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 09/37] binder: add protection for non-perf cases Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 11/37] binder: make binder_last_id an atomic Todd Kjos
                   ` (27 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos; +Cc: Badhri Jagan Sridharan

From: Badhri Jagan Sridharan <Badhri@google.com>

Use atomics for stats to avoid needing to lock for
increments/decrements

Signed-off-by: Todd Kjos <tkjos@google.com>
---
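The stats are independent counters, so a per-counter atomic is
enough; a minimal before/after sketch (illustrative only, assuming
<linux/atomic.h>):

	/* before: increments raced unless the global mutex was held */
	binder_stats.obj_created[type]++;

	/* after: lock-free and safe from any context */
	atomic_inc(&binder_stats.obj_created[type]);

	/* readers snapshot each counter individually */
	int created = atomic_read(&binder_stats.obj_created[type]);

One consequence: a debugfs reader may see counters that are
momentarily skewed relative to each other (e.g. created vs. deleted),
which is acceptable for statistics output.
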
 drivers/android/binder.c | 48 ++++++++++++++++++++++++++++--------------------
 1 file changed, 28 insertions(+), 20 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index fb484c6acd3e..cd4191d027e1 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -167,22 +167,22 @@ enum binder_stat_types {
 };
 
 struct binder_stats {
-	int br[_IOC_NR(BR_FAILED_REPLY) + 1];
-	int bc[_IOC_NR(BC_REPLY_SG) + 1];
-	int obj_created[BINDER_STAT_COUNT];
-	int obj_deleted[BINDER_STAT_COUNT];
+	atomic_t br[_IOC_NR(BR_FAILED_REPLY) + 1];
+	atomic_t bc[_IOC_NR(BC_REPLY_SG) + 1];
+	atomic_t obj_created[BINDER_STAT_COUNT];
+	atomic_t obj_deleted[BINDER_STAT_COUNT];
 };
 
 static struct binder_stats binder_stats;
 
 static inline void binder_stats_deleted(enum binder_stat_types type)
 {
-	binder_stats.obj_deleted[type]++;
+	atomic_inc(&binder_stats.obj_deleted[type]);
 }
 
 static inline void binder_stats_created(enum binder_stat_types type)
 {
-	binder_stats.obj_created[type]++;
+	atomic_inc(&binder_stats.obj_created[type]);
 }
 
 struct binder_transaction_log_entry {
@@ -1825,9 +1825,9 @@ static int binder_thread_write(struct binder_proc *proc,
 		ptr += sizeof(uint32_t);
 		trace_binder_command(cmd);
 		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
-			binder_stats.bc[_IOC_NR(cmd)]++;
-			proc->stats.bc[_IOC_NR(cmd)]++;
-			thread->stats.bc[_IOC_NR(cmd)]++;
+			atomic_inc(&binder_stats.bc[_IOC_NR(cmd)]);
+			atomic_inc(&proc->stats.bc[_IOC_NR(cmd)]);
+			atomic_inc(&thread->stats.bc[_IOC_NR(cmd)]);
 		}
 		switch (cmd) {
 		case BC_INCREFS:
@@ -2201,9 +2201,9 @@ static void binder_stat_br(struct binder_proc *proc,
 {
 	trace_binder_return(cmd);
 	if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.br)) {
-		binder_stats.br[_IOC_NR(cmd)]++;
-		proc->stats.br[_IOC_NR(cmd)]++;
-		thread->stats.br[_IOC_NR(cmd)]++;
+		atomic_inc(&binder_stats.br[_IOC_NR(cmd)]);
+		atomic_inc(&proc->stats.br[_IOC_NR(cmd)]);
+		atomic_inc(&thread->stats.br[_IOC_NR(cmd)]);
 	}
 }
 
@@ -3453,17 +3453,21 @@ static void print_binder_stats(struct seq_file *m, const char *prefix,
 	BUILD_BUG_ON(ARRAY_SIZE(stats->bc) !=
 		     ARRAY_SIZE(binder_command_strings));
 	for (i = 0; i < ARRAY_SIZE(stats->bc); i++) {
-		if (stats->bc[i])
+		int temp = atomic_read(&stats->bc[i]);
+
+		if (temp)
 			seq_printf(m, "%s%s: %d\n", prefix,
-				   binder_command_strings[i], stats->bc[i]);
+				   binder_command_strings[i], temp);
 	}
 
 	BUILD_BUG_ON(ARRAY_SIZE(stats->br) !=
 		     ARRAY_SIZE(binder_return_strings));
 	for (i = 0; i < ARRAY_SIZE(stats->br); i++) {
-		if (stats->br[i])
+		int temp = atomic_read(&stats->br[i]);
+
+		if (temp)
 			seq_printf(m, "%s%s: %d\n", prefix,
-				   binder_return_strings[i], stats->br[i]);
+				   binder_return_strings[i], temp);
 	}
 
 	BUILD_BUG_ON(ARRAY_SIZE(stats->obj_created) !=
@@ -3471,11 +3475,15 @@ static void print_binder_stats(struct seq_file *m, const char *prefix,
 	BUILD_BUG_ON(ARRAY_SIZE(stats->obj_created) !=
 		     ARRAY_SIZE(stats->obj_deleted));
 	for (i = 0; i < ARRAY_SIZE(stats->obj_created); i++) {
-		if (stats->obj_created[i] || stats->obj_deleted[i])
-			seq_printf(m, "%s%s: active %d total %d\n", prefix,
+		int created = atomic_read(&stats->obj_created[i]);
+		int deleted = atomic_read(&stats->obj_deleted[i]);
+
+		if (created || deleted)
+			seq_printf(m, "%s%s: active %d total %d\n",
+				prefix,
 				binder_objstat_strings[i],
-				stats->obj_created[i] - stats->obj_deleted[i],
-				stats->obj_created[i]);
+				created - deleted,
+				created);
 	}
 }
 
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 11/37] binder: make binder_last_id an atomic
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (9 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 10/37] binder: change binder_stats to atomics Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 12/37] binder: add log information for binder transaction failures Todd Kjos
                   ` (26 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Use an atomic for binder_last_id to avoid locking it

Signed-off-by: Todd Kjos <tkjos@google.com>
---
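atomic_inc_return() behaves as a fetch-and-add here, so concurrent
callers are guaranteed distinct, monotonically increasing debug IDs
without holding a lock. A minimal sketch (illustrative only):

	static atomic_t binder_last_id;

	static int next_debug_id(void)
	{
		/* returns the post-increment value; two concurrent
		 * callers can never observe the same ID
		 */
		return atomic_inc_return(&binder_last_id);
	}
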
 drivers/android/binder.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index cd4191d027e1..25f30d81c7d0 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -59,7 +59,7 @@ static DEFINE_SPINLOCK(binder_dead_nodes_lock);
 
 static struct dentry *binder_debugfs_dir_entry_root;
 static struct dentry *binder_debugfs_dir_entry_proc;
-static int binder_last_id;
+static atomic_t binder_last_id;
 
 #define BINDER_DEBUG_ENTRY(name) \
 static int binder_##name##_open(struct inode *inode, struct file *file) \
@@ -496,7 +496,7 @@ static struct binder_node *binder_new_node(struct binder_proc *proc,
 	binder_stats_created(BINDER_STAT_NODE);
 	rb_link_node(&node->rb_node, parent, p);
 	rb_insert_color(&node->rb_node, &proc->nodes);
-	node->debug_id = ++binder_last_id;
+	node->debug_id = atomic_inc_return(&binder_last_id);
 	node->proc = proc;
 	node->ptr = ptr;
 	node->cookie = cookie;
@@ -639,7 +639,7 @@ static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
 	if (new_ref == NULL)
 		return NULL;
 	binder_stats_created(BINDER_STAT_REF);
-	new_ref->debug_id = ++binder_last_id;
+	new_ref->debug_id = atomic_inc_return(&binder_last_id);
 	new_ref->proc = proc;
 	new_ref->node = node;
 	rb_link_node(&new_ref->rb_node_node, parent, p);
@@ -1527,7 +1527,7 @@ static void binder_transaction(struct binder_proc *proc,
 	}
 	binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
 
-	t->debug_id = ++binder_last_id;
+	t->debug_id = atomic_inc_return(&binder_last_id);
 	e->debug_id = t->debug_id;
 
 	if (reply)
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 12/37] binder: add log information for binder transaction failures
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (10 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 11/37] binder: make binder_last_id an atomic Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 13/37] binder: refactor queue management in binder_thread_read Todd Kjos
                   ` (25 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Add additional information to determine the cause of binder
failures. Adds the following to failed transaction log and
kernel messages:
	return_error : value returned for transaction
	return_error_param : errno returned by binder allocator
	return_error_line : line number where error detected

Also, return BR_DEAD_REPLY if an allocation error indicates
a dead proc (-ESRCH)

Signed-off-by: Todd Kjos <tkjos@google.com>
---
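The allocator now reports why it failed using the standard
ERR_PTR()/IS_ERR()/PTR_ERR() encoding instead of a bare NULL. A
condensed sketch of the resulting calling convention, mirroring the
hunks below (illustrative only):

	t->buffer = binder_alloc_new_buf(&target_proc->alloc,
					 tr->data_size, tr->offsets_size,
					 extra_buffers_size,
					 !reply && (t->flags & TF_ONE_WAY));
	if (IS_ERR(t->buffer)) {
		/* -ESRCH means the target VMA is gone: proc is dying */
		return_error_param = PTR_ERR(t->buffer);
		return_error = return_error_param == -ESRCH ?
			BR_DEAD_REPLY : BR_FAILED_REPLY;
		return_error_line = __LINE__;
		t->buffer = NULL;
	}
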
 drivers/android/binder.c       | 87 +++++++++++++++++++++++++++++++++++++-----
 drivers/android/binder_alloc.c | 20 +++++-----
 2 files changed, 88 insertions(+), 19 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 25f30d81c7d0..62ac0c41b8a6 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -196,6 +196,9 @@ struct binder_transaction_log_entry {
 	int to_node;
 	int data_size;
 	int offsets_size;
+	int return_error_line;
+	uint32_t return_error;
+	uint32_t return_error_param;
 	const char *context_name;
 };
 struct binder_transaction_log {
@@ -1142,7 +1145,7 @@ static int binder_translate_binder(struct flat_binder_object *fp,
 
 	ref = binder_get_ref_for_node(target_proc, node);
 	if (!ref)
-		return -EINVAL;
+		return -ENOMEM;
 
 	if (fp->hdr.type == BINDER_TYPE_BINDER)
 		fp->hdr.type = BINDER_TYPE_HANDLE;
@@ -1199,7 +1202,7 @@ static int binder_translate_handle(struct flat_binder_object *fp,
 
 		new_ref = binder_get_ref_for_node(target_proc, ref->node);
 		if (!new_ref)
-			return -EINVAL;
+			return -ENOMEM;
 
 		fp->binder = 0;
 		fp->handle = new_ref->desc;
@@ -1397,7 +1400,9 @@ static void binder_transaction(struct binder_proc *proc,
 	wait_queue_head_t *target_wait;
 	struct binder_transaction *in_reply_to = NULL;
 	struct binder_transaction_log_entry *e;
-	uint32_t return_error;
+	uint32_t return_error = 0;
+	uint32_t return_error_param = 0;
+	uint32_t return_error_line = 0;
 	struct binder_buffer_object *last_fixup_obj = NULL;
 	binder_size_t last_fixup_min_off = 0;
 	struct binder_context *context = proc->context;
@@ -1417,6 +1422,8 @@ static void binder_transaction(struct binder_proc *proc,
 			binder_user_error("%d:%d got reply transaction with no transaction stack\n",
 					  proc->pid, thread->pid);
 			return_error = BR_FAILED_REPLY;
+			return_error_param = -EPROTO;
+			return_error_line = __LINE__;
 			goto err_empty_call_stack;
 		}
 		binder_set_nice(in_reply_to->saved_priority);
@@ -1428,6 +1435,8 @@ static void binder_transaction(struct binder_proc *proc,
 				in_reply_to->to_thread ?
 				in_reply_to->to_thread->pid : 0);
 			return_error = BR_FAILED_REPLY;
+			return_error_param = -EPROTO;
+			return_error_line = __LINE__;
 			in_reply_to = NULL;
 			goto err_bad_call_stack;
 		}
@@ -1435,6 +1444,7 @@ static void binder_transaction(struct binder_proc *proc,
 		target_thread = in_reply_to->from;
 		if (target_thread == NULL) {
 			return_error = BR_DEAD_REPLY;
+			return_error_line = __LINE__;
 			goto err_dead_binder;
 		}
 		if (target_thread->transaction_stack != in_reply_to) {
@@ -1444,6 +1454,8 @@ static void binder_transaction(struct binder_proc *proc,
 				target_thread->transaction_stack->debug_id : 0,
 				in_reply_to->debug_id);
 			return_error = BR_FAILED_REPLY;
+			return_error_param = -EPROTO;
+			return_error_line = __LINE__;
 			in_reply_to = NULL;
 			target_thread = NULL;
 			goto err_dead_binder;
@@ -1458,6 +1470,8 @@ static void binder_transaction(struct binder_proc *proc,
 				binder_user_error("%d:%d got transaction to invalid handle\n",
 					proc->pid, thread->pid);
 				return_error = BR_FAILED_REPLY;
+				return_error_param = -EINVAL;
+				return_error_line = __LINE__;
 				goto err_invalid_target_handle;
 			}
 			target_node = ref->node;
@@ -1467,6 +1481,7 @@ static void binder_transaction(struct binder_proc *proc,
 			if (target_node == NULL) {
 				return_error = BR_DEAD_REPLY;
 				mutex_unlock(&context->context_mgr_node_lock);
+				return_error_line = __LINE__;
 				goto err_no_context_mgr_node;
 			}
 			mutex_unlock(&context->context_mgr_node_lock);
@@ -1475,11 +1490,14 @@ static void binder_transaction(struct binder_proc *proc,
 		target_proc = target_node->proc;
 		if (target_proc == NULL) {
 			return_error = BR_DEAD_REPLY;
+			return_error_line = __LINE__;
 			goto err_dead_binder;
 		}
 		if (security_binder_transaction(proc->tsk,
 						target_proc->tsk) < 0) {
 			return_error = BR_FAILED_REPLY;
+			return_error_param = -EPERM;
+			return_error_line = __LINE__;
 			goto err_invalid_target_handle;
 		}
 		if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
@@ -1493,6 +1511,8 @@ static void binder_transaction(struct binder_proc *proc,
 					tmp->to_thread ?
 					tmp->to_thread->pid : 0);
 				return_error = BR_FAILED_REPLY;
+				return_error_param = -EPROTO;
+				return_error_line = __LINE__;
 				goto err_bad_call_stack;
 			}
 			while (tmp) {
@@ -1516,6 +1536,8 @@ static void binder_transaction(struct binder_proc *proc,
 	t = kzalloc(sizeof(*t), GFP_KERNEL);
 	if (t == NULL) {
 		return_error = BR_FAILED_REPLY;
+		return_error_param = -ENOMEM;
+		return_error_line = __LINE__;
 		goto err_alloc_t_failed;
 	}
 	binder_stats_created(BINDER_STAT_TRANSACTION);
@@ -1523,6 +1545,8 @@ static void binder_transaction(struct binder_proc *proc,
 	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
 	if (tcomplete == NULL) {
 		return_error = BR_FAILED_REPLY;
+		return_error_param = -ENOMEM;
+		return_error_line = __LINE__;
 		goto err_alloc_tcomplete_failed;
 	}
 	binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
@@ -1565,8 +1589,15 @@ static void binder_transaction(struct binder_proc *proc,
 	t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
 		tr->offsets_size, extra_buffers_size,
 		!reply && (t->flags & TF_ONE_WAY));
-	if (t->buffer == NULL) {
-		return_error = BR_FAILED_REPLY;
+	if (IS_ERR(t->buffer)) {
+		/*
+		 * -ESRCH indicates VMA cleared. The target is dying.
+		 */
+		return_error_param = PTR_ERR(t->buffer);
+		return_error = return_error_param == -ESRCH ?
+			BR_DEAD_REPLY : BR_FAILED_REPLY;
+		return_error_line = __LINE__;
+		t->buffer = NULL;
 		goto err_binder_alloc_buf_failed;
 	}
 	t->buffer->allow_user_free = 0;
@@ -1586,6 +1617,8 @@ static void binder_transaction(struct binder_proc *proc,
 		binder_user_error("%d:%d got transaction with invalid data ptr\n",
 				proc->pid, thread->pid);
 		return_error = BR_FAILED_REPLY;
+		return_error_param = -EFAULT;
+		return_error_line = __LINE__;
 		goto err_copy_data_failed;
 	}
 	if (copy_from_user(offp, (const void __user *)(uintptr_t)
@@ -1593,12 +1626,16 @@ static void binder_transaction(struct binder_proc *proc,
 		binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
 				proc->pid, thread->pid);
 		return_error = BR_FAILED_REPLY;
+		return_error_param = -EFAULT;
+		return_error_line = __LINE__;
 		goto err_copy_data_failed;
 	}
 	if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) {
 		binder_user_error("%d:%d got transaction with invalid offsets size, %lld\n",
 				proc->pid, thread->pid, (u64)tr->offsets_size);
 		return_error = BR_FAILED_REPLY;
+		return_error_param = -EINVAL;
+		return_error_line = __LINE__;
 		goto err_bad_offset;
 	}
 	if (!IS_ALIGNED(extra_buffers_size, sizeof(u64))) {
@@ -1606,6 +1643,8 @@ static void binder_transaction(struct binder_proc *proc,
 				  proc->pid, thread->pid,
 				  (u64)extra_buffers_size);
 		return_error = BR_FAILED_REPLY;
+		return_error_param = -EINVAL;
+		return_error_line = __LINE__;
 		goto err_bad_offset;
 	}
 	off_end = (void *)off_start + tr->offsets_size;
@@ -1622,6 +1661,8 @@ static void binder_transaction(struct binder_proc *proc,
 					  (u64)off_min,
 					  (u64)t->buffer->data_size);
 			return_error = BR_FAILED_REPLY;
+			return_error_param = -EINVAL;
+			return_error_line = __LINE__;
 			goto err_bad_offset;
 		}
 
@@ -1636,6 +1677,8 @@ static void binder_transaction(struct binder_proc *proc,
 			ret = binder_translate_binder(fp, t, thread);
 			if (ret < 0) {
 				return_error = BR_FAILED_REPLY;
+				return_error_param = ret;
+				return_error_line = __LINE__;
 				goto err_translate_failed;
 			}
 		} break;
@@ -1647,6 +1690,8 @@ static void binder_transaction(struct binder_proc *proc,
 			ret = binder_translate_handle(fp, t, thread);
 			if (ret < 0) {
 				return_error = BR_FAILED_REPLY;
+				return_error_param = ret;
+				return_error_line = __LINE__;
 				goto err_translate_failed;
 			}
 		} break;
@@ -1658,6 +1703,8 @@ static void binder_transaction(struct binder_proc *proc,
 
 			if (target_fd < 0) {
 				return_error = BR_FAILED_REPLY;
+				return_error_param = target_fd;
+				return_error_line = __LINE__;
 				goto err_translate_failed;
 			}
 			fp->pad_binder = 0;
@@ -1674,6 +1721,8 @@ static void binder_transaction(struct binder_proc *proc,
 				binder_user_error("%d:%d got transaction with invalid parent offset or type\n",
 						  proc->pid, thread->pid);
 				return_error = BR_FAILED_REPLY;
+				return_error_param = -EINVAL;
+				return_error_line = __LINE__;
 				goto err_bad_parent;
 			}
 			if (!binder_validate_fixup(t->buffer, off_start,
@@ -1683,12 +1732,16 @@ static void binder_transaction(struct binder_proc *proc,
 				binder_user_error("%d:%d got transaction with out-of-order buffer fixup\n",
 						  proc->pid, thread->pid);
 				return_error = BR_FAILED_REPLY;
+				return_error_param = -EINVAL;
+				return_error_line = __LINE__;
 				goto err_bad_parent;
 			}
 			ret = binder_translate_fd_array(fda, parent, t, thread,
 							in_reply_to);
 			if (ret < 0) {
 				return_error = BR_FAILED_REPLY;
+				return_error_param = ret;
+				return_error_line = __LINE__;
 				goto err_translate_failed;
 			}
 			last_fixup_obj = parent;
@@ -1704,6 +1757,8 @@ static void binder_transaction(struct binder_proc *proc,
 				binder_user_error("%d:%d got transaction with too large buffer\n",
 						  proc->pid, thread->pid);
 				return_error = BR_FAILED_REPLY;
+				return_error_param = -EINVAL;
+				return_error_line = __LINE__;
 				goto err_bad_offset;
 			}
 			if (copy_from_user(sg_bufp,
@@ -1711,7 +1766,9 @@ static void binder_transaction(struct binder_proc *proc,
 					   bp->buffer, bp->length)) {
 				binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
 						  proc->pid, thread->pid);
+				return_error_param = -EFAULT;
 				return_error = BR_FAILED_REPLY;
+				return_error_line = __LINE__;
 				goto err_copy_data_failed;
 			}
 			/* Fixup buffer pointer to target proc address space */
@@ -1726,6 +1783,8 @@ static void binder_transaction(struct binder_proc *proc,
 						  last_fixup_min_off);
 			if (ret < 0) {
 				return_error = BR_FAILED_REPLY;
+				return_error_param = ret;
+				return_error_line = __LINE__;
 				goto err_translate_failed;
 			}
 			last_fixup_obj = bp;
@@ -1735,6 +1794,8 @@ static void binder_transaction(struct binder_proc *proc,
 			binder_user_error("%d:%d got transaction with invalid object type, %x\n",
 				proc->pid, thread->pid, hdr->type);
 			return_error = BR_FAILED_REPLY;
+			return_error_param = -EINVAL;
+			return_error_line = __LINE__;
 			goto err_bad_object_type;
 		}
 	}
@@ -1789,13 +1850,17 @@ static void binder_transaction(struct binder_proc *proc,
 err_invalid_target_handle:
 err_no_context_mgr_node:
 	binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
-		     "%d:%d transaction failed %d, size %lld-%lld\n",
-		     proc->pid, thread->pid, return_error,
-		     (u64)tr->data_size, (u64)tr->offsets_size);
+		     "%d:%d transaction failed %d/%d, size %lld-%lld line %d\n",
+		     proc->pid, thread->pid, return_error, return_error_param,
+		     (u64)tr->data_size, (u64)tr->offsets_size,
+		     return_error_line);
 
 	{
 		struct binder_transaction_log_entry *fe;
 
+		e->return_error = return_error;
+		e->return_error_param = return_error_param;
+		e->return_error_line = return_error_line;
 		fe = binder_transaction_log_add(&binder_transaction_log_failed);
 		*fe = *e;
 	}
@@ -3622,11 +3687,13 @@ static void print_binder_transaction_log_entry(struct seq_file *m,
 					struct binder_transaction_log_entry *e)
 {
 	seq_printf(m,
-		   "%d: %s from %d:%d to %d:%d context %s node %d handle %d size %d:%d\n",
+		   "%d: %s from %d:%d to %d:%d context %s node %d handle %d size %d:%d ret %d/%d l=%d\n",
 		   e->debug_id, (e->call_type == 2) ? "reply" :
 		   ((e->call_type == 1) ? "async" : "call "), e->from_proc,
 		   e->from_thread, e->to_proc, e->to_thread, e->context_name,
-		   e->to_node, e->target_handle, e->data_size, e->offsets_size);
+		   e->to_node, e->target_handle, e->data_size, e->offsets_size,
+		   e->return_error, e->return_error_param,
+		   e->return_error_line);
 }
 
 static int binder_transaction_log_show(struct seq_file *m, void *unused)
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 198d04c5d958..a0af1419cc79 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -262,7 +262,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 		up_write(&mm->mmap_sem);
 		mmput(mm);
 	}
-	return -ENOMEM;
+	return vma ? -ENOMEM : -ESRCH;
 }
 
 struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc,
@@ -278,11 +278,12 @@ struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc,
 	void *has_page_addr;
 	void *end_page_addr;
 	size_t size, data_offsets_size;
+	int ret;
 
 	if (alloc->vma == NULL) {
 		pr_err("%d: binder_alloc_buf, no vma\n",
 		       alloc->pid);
-		return NULL;
+		return ERR_PTR(-ESRCH);
 	}
 
 	data_offsets_size = ALIGN(data_size, sizeof(void *)) +
@@ -292,21 +293,21 @@ struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc,
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				"%d: got transaction with invalid size %zd-%zd\n",
 				alloc->pid, data_size, offsets_size);
-		return NULL;
+		return ERR_PTR(-EINVAL);
 	}
 	size = data_offsets_size + ALIGN(extra_buffers_size, sizeof(void *));
 	if (size < data_offsets_size || size < extra_buffers_size) {
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				"%d: got transaction with invalid extra_buffers_size %zd\n",
 				alloc->pid, extra_buffers_size);
-		return NULL;
+		return ERR_PTR(-EINVAL);
 	}
 	if (is_async &&
 	    alloc->free_async_space < size + sizeof(struct binder_buffer)) {
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 			     "%d: binder_alloc_buf size %zd failed, no async space left\n",
 			      alloc->pid, size);
-		return NULL;
+		return ERR_PTR(-ENOSPC);
 	}
 
 	while (n) {
@@ -327,7 +328,7 @@ struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc,
 	if (best_fit == NULL) {
 		pr_err("%d: binder_alloc_buf size %zd failed, no address space\n",
 			alloc->pid, size);
-		return NULL;
+		return ERR_PTR(-ENOSPC);
 	}
 	if (n == NULL) {
 		buffer = rb_entry(best_fit, struct binder_buffer, rb_node);
@@ -350,9 +351,10 @@ struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc,
 		(void *)PAGE_ALIGN((uintptr_t)buffer->data + buffer_size);
 	if (end_page_addr > has_page_addr)
 		end_page_addr = has_page_addr;
-	if (binder_update_page_range(alloc, 1,
-	    (void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr, NULL))
-		return NULL;
+	ret = binder_update_page_range(alloc, 1,
+	    (void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr, NULL);
+	if (ret)
+		return ERR_PTR(ret);
 
 	rb_erase(best_fit, &alloc->free_buffers);
 	buffer->free = 0;
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 13/37] binder: refactor queue management in binder_thread_read
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (11 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 12/37] binder: add log information for binder transaction failures Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 14/37] binder: avoid race conditions when enqueuing txn Todd Kjos
                   ` (24 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

In binder_thread_read, the BINDER_WORK_NODE command is used
to communicate the references on the node to userspace. It
can take a couple of iterations in the loop to construct
the list of commands for userspace. Once locking is added,
the lock would need to be released on each iteration, which
means the state could change between iterations. The work
item is also not dequeued during this process, which rules
out a simpler queue-management scheme that dequeues the
work item up front and then handles it.

Fixed by changing the BINDER_WORK_NODE algorithm in
binder_thread_read to determine which commands to send
to userspace atomically in one pass, so the result stays
consistent with the kernel's view of the node.

The work item is now dequeued immediately since only
one pass is needed.
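
For illustration, here is a minimal userspace model of the
one-pass decision logic; the BR_* names mirror the driver's,
but the harness itself is invented and not part of the patch:

#include <stdio.h>

/*
 * Model of the one-pass BINDER_WORK_NODE logic: snapshot the
 * node state once, then emit every command implied by the
 * deltas in a fixed order. The real driver writes these into
 * the read buffer via binder_put_node_cmd() and stops at the
 * first error.
 */
static void emit_node_cmds(int strong, int weak,
			   int has_strong_ref, int has_weak_ref)
{
	if (weak && !has_weak_ref)
		printf("BR_INCREFS\n");
	if (strong && !has_strong_ref)
		printf("BR_ACQUIRE\n");
	if (!strong && has_strong_ref)
		printf("BR_RELEASE\n");
	if (!weak && has_weak_ref)
		printf("BR_DECREFS\n");
}

int main(void)
{
	/* Node just gained its first strong ref: INCREFS then ACQUIRE. */
	emit_node_cmds(1, 1, 0, 0);
	return 0;
}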

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 151 +++++++++++++++++++++++++++++------------------
 1 file changed, 94 insertions(+), 57 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 62ac0c41b8a6..f17d1dfa5b02 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -2285,6 +2285,37 @@ static int binder_has_thread_work(struct binder_thread *thread)
 		(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);
 }
 
+static int binder_put_node_cmd(struct binder_proc *proc,
+			       struct binder_thread *thread,
+			       void __user **ptrp,
+			       binder_uintptr_t node_ptr,
+			       binder_uintptr_t node_cookie,
+			       int node_debug_id,
+			       uint32_t cmd, const char *cmd_name)
+{
+	void __user *ptr = *ptrp;
+
+	if (put_user(cmd, (uint32_t __user *)ptr))
+		return -EFAULT;
+	ptr += sizeof(uint32_t);
+
+	if (put_user(node_ptr, (binder_uintptr_t __user *)ptr))
+		return -EFAULT;
+	ptr += sizeof(binder_uintptr_t);
+
+	if (put_user(node_cookie, (binder_uintptr_t __user *)ptr))
+		return -EFAULT;
+	ptr += sizeof(binder_uintptr_t);
+
+	binder_stat_br(proc, thread, cmd);
+	binder_debug(BINDER_DEBUG_USER_REFS, "%d:%d %s %d u%016llx c%016llx\n",
+		     proc->pid, thread->pid, cmd_name, node_debug_id,
+		     (u64)node_ptr, (u64)node_cookie);
+
+	*ptrp = ptr;
+	return 0;
+}
+
 static int binder_thread_read(struct binder_proc *proc,
 			      struct binder_thread *thread,
 			      binder_uintptr_t binder_buffer, size_t size,
@@ -2410,72 +2441,78 @@ static int binder_thread_read(struct binder_proc *proc,
 		} break;
 		case BINDER_WORK_NODE: {
 			struct binder_node *node = container_of(w, struct binder_node, work);
-			uint32_t cmd = BR_NOOP;
-			const char *cmd_name;
-			int strong = node->internal_strong_refs || node->local_strong_refs;
-			int weak = !hlist_empty(&node->refs) || node->local_weak_refs || strong;
-
-			if (weak && !node->has_weak_ref) {
-				cmd = BR_INCREFS;
-				cmd_name = "BR_INCREFS";
+			int strong, weak;
+			binder_uintptr_t node_ptr = node->ptr;
+			binder_uintptr_t node_cookie = node->cookie;
+			int node_debug_id = node->debug_id;
+			int has_weak_ref;
+			int has_strong_ref;
+			void __user *orig_ptr = ptr;
+
+			BUG_ON(proc != node->proc);
+			strong = node->internal_strong_refs ||
+					node->local_strong_refs;
+			weak = !hlist_empty(&node->refs) ||
+					node->local_weak_refs || strong;
+			has_strong_ref = node->has_strong_ref;
+			has_weak_ref = node->has_weak_ref;
+
+			if (weak && !has_weak_ref) {
 				node->has_weak_ref = 1;
 				node->pending_weak_ref = 1;
 				node->local_weak_refs++;
-			} else if (strong && !node->has_strong_ref) {
-				cmd = BR_ACQUIRE;
-				cmd_name = "BR_ACQUIRE";
+			}
+			if (strong && !has_strong_ref) {
 				node->has_strong_ref = 1;
 				node->pending_strong_ref = 1;
 				node->local_strong_refs++;
-			} else if (!strong && node->has_strong_ref) {
-				cmd = BR_RELEASE;
-				cmd_name = "BR_RELEASE";
+			}
+			if (!strong && has_strong_ref)
 				node->has_strong_ref = 0;
-			} else if (!weak && node->has_weak_ref) {
-				cmd = BR_DECREFS;
-				cmd_name = "BR_DECREFS";
+			if (!weak && has_weak_ref)
 				node->has_weak_ref = 0;
+			list_del(&w->entry);
+
+			if (!weak && !strong) {
+				binder_debug(BINDER_DEBUG_INTERNAL_REFS,
+					     "%d:%d node %d u%016llx c%016llx deleted\n",
+					     proc->pid, thread->pid,
+					     node_debug_id,
+					     (u64)node_ptr,
+					     (u64)node_cookie);
+				rb_erase(&node->rb_node, &proc->nodes);
+				kfree(node);
+				binder_stats_deleted(BINDER_STAT_NODE);
 			}
-			if (cmd != BR_NOOP) {
-				if (put_user(cmd, (uint32_t __user *)ptr))
-					return -EFAULT;
-				ptr += sizeof(uint32_t);
-				if (put_user(node->ptr,
-					     (binder_uintptr_t __user *)ptr))
-					return -EFAULT;
-				ptr += sizeof(binder_uintptr_t);
-				if (put_user(node->cookie,
-					     (binder_uintptr_t __user *)ptr))
-					return -EFAULT;
-				ptr += sizeof(binder_uintptr_t);
-
-				binder_stat_br(proc, thread, cmd);
-				binder_debug(BINDER_DEBUG_USER_REFS,
-					     "%d:%d %s %d u%016llx c%016llx\n",
-					     proc->pid, thread->pid, cmd_name,
-					     node->debug_id,
-					     (u64)node->ptr, (u64)node->cookie);
-			} else {
-				list_del_init(&w->entry);
-				if (!weak && !strong) {
-					binder_debug(BINDER_DEBUG_INTERNAL_REFS,
-						     "%d:%d node %d u%016llx c%016llx deleted\n",
-						     proc->pid, thread->pid,
-						     node->debug_id,
-						     (u64)node->ptr,
-						     (u64)node->cookie);
-					rb_erase(&node->rb_node, &proc->nodes);
-					kfree(node);
-					binder_stats_deleted(BINDER_STAT_NODE);
-				} else {
-					binder_debug(BINDER_DEBUG_INTERNAL_REFS,
-						     "%d:%d node %d u%016llx c%016llx state unchanged\n",
-						     proc->pid, thread->pid,
-						     node->debug_id,
-						     (u64)node->ptr,
-						     (u64)node->cookie);
-				}
-			}
+			if (weak && !has_weak_ref)
+				ret = binder_put_node_cmd(
+						proc, thread, &ptr, node_ptr,
+						node_cookie, node_debug_id,
+						BR_INCREFS, "BR_INCREFS");
+			if (!ret && strong && !has_strong_ref)
+				ret = binder_put_node_cmd(
+						proc, thread, &ptr, node_ptr,
+						node_cookie, node_debug_id,
+						BR_ACQUIRE, "BR_ACQUIRE");
+			if (!ret && !strong && has_strong_ref)
+				ret = binder_put_node_cmd(
+						proc, thread, &ptr, node_ptr,
+						node_cookie, node_debug_id,
+						BR_RELEASE, "BR_RELEASE");
+			if (!ret && !weak && has_weak_ref)
+				ret = binder_put_node_cmd(
+						proc, thread, &ptr, node_ptr,
+						node_cookie, node_debug_id,
+						BR_DECREFS, "BR_DECREFS");
+			if (orig_ptr == ptr)
+				binder_debug(BINDER_DEBUG_INTERNAL_REFS,
+					     "%d:%d node %d u%016llx c%016llx state unchanged\n",
+					     proc->pid, thread->pid,
+					     node_debug_id,
+					     (u64)node_ptr,
+					     (u64)node_cookie);
+			if (ret)
+				return ret;
 		} break;
 		case BINDER_WORK_DEAD_BINDER:
 		case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 14/37] binder: avoid race conditions when enqueuing txn
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (12 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 13/37] binder: refactor queue management in binder_thread_read Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 15/37] binder: don't modify thread->looper from other threads Todd Kjos
                   ` (23 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Currently, the transaction complete work item is queued
after the transaction itself. This means it is possible
for the transaction to be handled and a reply to be
enqueued in the current thread before the transaction
complete is enqueued, which violates the protocol with
userspace, which may not expect the transaction complete.
Fixed by always enqueueing the transaction complete first.

Also, once the transaction is enqueued it is unsafe to
access, since it might be freed by the target. Currently,
t->flags is accessed to determine whether a sync wakeup
is needed. Changed to access tr->flags instead.
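
The gist, as a small compilable userspace fragment (the
TF_ONE_WAY value and all names here are illustrative, not
the driver's):

#include <stdbool.h>
#include <stdio.h>

#define TF_ONE_WAY 0x01	/* illustrative value */

struct txn_req { unsigned int flags; };

/*
 * Once the transaction is on the target's queue it may be
 * consumed and freed at any moment, so the wakeup decision
 * must read the caller-owned request ('tr'), never the
 * enqueued object ('t').
 */
static bool needs_sync_wake(const struct txn_req *tr, bool reply)
{
	return reply || !(tr->flags & TF_ONE_WAY);
}

int main(void)
{
	struct txn_req tr = { .flags = TF_ONE_WAY };

	printf("sync wake: %d\n", needs_sync_wake(&tr, false));
	return 0;
}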

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index f17d1dfa5b02..71faf548482d 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -1799,6 +1799,9 @@ static void binder_transaction(struct binder_proc *proc,
 			goto err_bad_object_type;
 		}
 	}
+	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
+	list_add_tail(&tcomplete->entry, &thread->todo);
+
 	if (reply) {
 		BUG_ON(t->buffer->async_transaction != 0);
 		binder_pop_transaction(target_thread, in_reply_to);
@@ -1818,10 +1821,8 @@ static void binder_transaction(struct binder_proc *proc,
 	}
 	t->work.type = BINDER_WORK_TRANSACTION;
 	list_add_tail(&t->work.entry, target_list);
-	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
-	list_add_tail(&tcomplete->entry, &thread->todo);
 	if (target_wait) {
-		if (reply || !(t->flags & TF_ONE_WAY))
+		if (reply || !(tr->flags & TF_ONE_WAY))
 			wake_up_interruptible_sync(target_wait);
 		else
 			wake_up_interruptible(target_wait);
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 15/37] binder: don't modify thread->looper from other threads
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (13 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 14/37] binder: avoid race conditions when enqueuing txn Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 16/37] binder: remove dead code in binder_get_ref_for_node Todd Kjos
                   ` (22 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

The looper member of struct binder_thread is a bitmask
of control bits. All of the existing bits are modified
by the affected thread itself, except for
BINDER_LOOPER_STATE_NEED_RETURN, which can be modified in
binder_deferred_flush() by another thread.

To avoid adding a spinlock around every read-modify-write
of the bitmask, the BINDER_LOOPER_STATE_NEED_RETURN flag
is replaced by a separate field in struct binder_thread.
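
A compilable userspace sketch of why the separate field
avoids the lock (names and flag values are invented):

#include <stdbool.h>

#define STATE_WAITING 0x10	/* illustrative looper bit */

struct thread_state {
	int  looper;	  /* written only by the owning thread */
	bool need_return; /* may be written by any thread */
};

/* Owner thread: read-modify-write is safe, it has no writer peers. */
static void owner_mark_waiting(struct thread_state *t)
{
	t->looper |= STATE_WAITING;
}

/*
 * Another thread: a '|=' here could race with the owner's RMW and
 * lose one of the two updates. A dedicated bool is a single plain
 * store, so no spinlock is needed.
 */
static void other_request_return(struct thread_state *t)
{
	t->need_return = true;
}

int main(void)
{
	struct thread_state t = { 0 };

	owner_mark_waiting(&t);
	other_request_return(&t);
	return 0;
}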

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 71faf548482d..3c1129d91825 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -334,14 +334,14 @@ enum {
 	BINDER_LOOPER_STATE_EXITED      = 0x04,
 	BINDER_LOOPER_STATE_INVALID     = 0x08,
 	BINDER_LOOPER_STATE_WAITING     = 0x10,
-	BINDER_LOOPER_STATE_NEED_RETURN = 0x20
 };
 
 struct binder_thread {
 	struct binder_proc *proc;
 	struct rb_node rb_node;
 	int pid;
-	int looper;
+	int looper;              /* only modified by this thread */
+	bool looper_need_return; /* can be written by other thread */
 	struct binder_transaction *transaction_stack;
 	struct list_head todo;
 	uint32_t return_error; /* Write failed, return error code in read buf */
@@ -2276,14 +2276,13 @@ static void binder_stat_br(struct binder_proc *proc,
 static int binder_has_proc_work(struct binder_proc *proc,
 				struct binder_thread *thread)
 {
-	return !list_empty(&proc->todo) ||
-		(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);
+	return !list_empty(&proc->todo) || thread->looper_need_return;
 }
 
 static int binder_has_thread_work(struct binder_thread *thread)
 {
 	return !list_empty(&thread->todo) || thread->return_error != BR_OK ||
-		(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);
+		thread->looper_need_return;
 }
 
 static int binder_put_node_cmd(struct binder_proc *proc,
@@ -2412,8 +2411,7 @@ static int binder_thread_read(struct binder_proc *proc,
 					     entry);
 		} else {
 			/* no data added */
-			if (ptr - buffer == 4 &&
-			    !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN))
+			if (ptr - buffer == 4 && !thread->looper_need_return)
 				goto retry;
 			break;
 		}
@@ -2727,7 +2725,7 @@ static struct binder_thread *binder_get_thread(struct binder_proc *proc)
 		INIT_LIST_HEAD(&thread->todo);
 		rb_link_node(&thread->rb_node, parent, p);
 		rb_insert_color(&thread->rb_node, &proc->threads);
-		thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;
+		thread->looper_need_return = true;
 		thread->return_error = BR_OK;
 		thread->return_error2 = BR_OK;
 	}
@@ -2983,7 +2981,7 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	ret = 0;
 err:
 	if (thread)
-		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
+		thread->looper_need_return = false;
 	binder_unlock(__func__);
 	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
 	if (ret && ret != -ERESTARTSYS)
@@ -3138,7 +3136,7 @@ static void binder_deferred_flush(struct binder_proc *proc)
 	for (n = rb_first(&proc->threads); n != NULL; n = rb_next(n)) {
 		struct binder_thread *thread = rb_entry(n, struct binder_thread, rb_node);
 
-		thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;
+		thread->looper_need_return = true;
 		if (thread->looper & BINDER_LOOPER_STATE_WAITING) {
 			wake_up_interruptible(&thread->wait);
 			wake_count++;
@@ -3399,7 +3397,9 @@ static void print_binder_thread(struct seq_file *m,
 	size_t start_pos = m->count;
 	size_t header_pos;
 
-	seq_printf(m, "  thread %d: l %02x\n", thread->pid, thread->looper);
+	seq_printf(m, "  thread %d: l %02x need_return %d\n",
+			thread->pid, thread->looper,
+			thread->looper_need_return);
 	header_pos = m->count;
 	t = thread->transaction_stack;
 	while (t) {
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 16/37] binder: remove dead code in binder_get_ref_for_node
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (14 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 15/37] binder: don't modify thread->looper from other threads Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 17/37] binder: protect against two threads freeing buffer Todd Kjos
                   ` (21 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

node is always non-NULL in binder_get_ref_for_node, so the
conditional and the else clause are not needed.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 3c1129d91825..3bbfb2455b70 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -670,18 +670,12 @@ static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
 	}
 	rb_link_node(&new_ref->rb_node_desc, parent, p);
 	rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc);
-	if (node) {
-		hlist_add_head(&new_ref->node_entry, &node->refs);
+	hlist_add_head(&new_ref->node_entry, &node->refs);
 
-		binder_debug(BINDER_DEBUG_INTERNAL_REFS,
-			     "%d new ref %d desc %d for node %d\n",
-			      proc->pid, new_ref->debug_id, new_ref->desc,
-			      node->debug_id);
-	} else {
-		binder_debug(BINDER_DEBUG_INTERNAL_REFS,
-			     "%d new ref %d desc %d for dead node\n",
-			      proc->pid, new_ref->debug_id, new_ref->desc);
-	}
+	binder_debug(BINDER_DEBUG_INTERNAL_REFS,
+		     "%d new ref %d desc %d for node %d\n",
+		      proc->pid, new_ref->debug_id, new_ref->desc,
+		      node->debug_id);
 	return new_ref;
 }
 
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 17/37] binder: protect against two threads freeing buffer
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (15 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 16/37] binder: remove dead code in binder_get_ref_for_node Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 18/37] binder: add more debug info when allocation fails Todd Kjos
                   ` (20 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Adds protection against malicious user code freeing
the same buffer from two threads at the same time, which
could cause a crash. This cannot happen under normal use.
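
A minimal userspace sketch of the claim-once pattern, using
a pthread mutex in place of alloc->mutex (names are
illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct buffer { bool free_in_progress; };

static pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * The first caller claims the buffer; a concurrent second free
 * of the same buffer is refused instead of corrupting the
 * allocator's lists.
 */
static struct buffer *prepare_to_free(struct buffer *b)
{
	struct buffer *ret = NULL;

	pthread_mutex_lock(&alloc_lock);
	if (!b->free_in_progress) {
		b->free_in_progress = true;
		ret = b;
	}
	pthread_mutex_unlock(&alloc_lock);
	return ret;	/* NULL: someone else is already freeing it */
}

int main(void)
{
	struct buffer b = { false };

	return prepare_to_free(&b) ? 0 : 1;
}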

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c       |  4 ++--
 drivers/android/binder_alloc.c | 22 +++++++++++++++++-----
 drivers/android/binder_alloc.h |  7 ++++---
 3 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 3bbfb2455b70..a1912a22c89c 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -2024,8 +2024,8 @@ static int binder_thread_write(struct binder_proc *proc,
 				return -EFAULT;
 			ptr += sizeof(binder_uintptr_t);
 
-			buffer = binder_alloc_buffer_lookup(&proc->alloc,
-							    data_ptr);
+			buffer = binder_alloc_prepare_to_free(&proc->alloc,
+							      data_ptr);
 			if (buffer == NULL) {
 				binder_user_error("%d:%d BC_FREE_BUFFER u%016llx no match\n",
 					proc->pid, thread->pid, (u64)data_ptr);
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index a0af1419cc79..2a2e41b13de5 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -116,7 +116,7 @@ static void binder_insert_allocated_buffer_locked(
 	rb_insert_color(&new_buffer->rb_node, &alloc->allocated_buffers);
 }
 
-static struct binder_buffer *binder_alloc_buffer_lookup_locked(
+static struct binder_buffer *binder_alloc_prepare_to_free_locked(
 		struct binder_alloc *alloc,
 		uintptr_t user_ptr)
 {
@@ -135,8 +135,19 @@ static struct binder_buffer *binder_alloc_buffer_lookup_locked(
 			n = n->rb_left;
 		else if (kern_ptr > buffer)
 			n = n->rb_right;
-		else
+		else {
+			/*
+			 * Guard against user threads attempting to
+			 * free the buffer twice
+			 */
+			if (buffer->free_in_progress) {
+				pr_err("%d:%d FREE_BUFFER u%016llx user freed buffer twice\n",
+				       alloc->pid, current->pid, (u64)user_ptr);
+				return NULL;
+			}
+			buffer->free_in_progress = 1;
 			return buffer;
+		}
 	}
 	return NULL;
 }
@@ -152,13 +163,13 @@ static struct binder_buffer *binder_alloc_buffer_lookup_locked(
  *
  * Return:	Pointer to buffer or NULL
  */
-struct binder_buffer *binder_alloc_buffer_lookup(struct binder_alloc *alloc,
-						 uintptr_t user_ptr)
+struct binder_buffer *binder_alloc_prepare_to_free(struct binder_alloc *alloc,
+						   uintptr_t user_ptr)
 {
 	struct binder_buffer *buffer;
 
 	mutex_lock(&alloc->mutex);
-	buffer = binder_alloc_buffer_lookup_locked(alloc, user_ptr);
+	buffer = binder_alloc_prepare_to_free_locked(alloc, user_ptr);
 	mutex_unlock(&alloc->mutex);
 	return buffer;
 }
@@ -358,6 +369,7 @@ struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc,
 
 	rb_erase(best_fit, &alloc->free_buffers);
 	buffer->free = 0;
+	buffer->free_in_progress = 0;
 	binder_insert_allocated_buffer_locked(alloc, buffer);
 	if (buffer_size != size) {
 		struct binder_buffer *new_buffer = (void *)buffer->data + size;
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index 721c511431f9..088e4ffc6230 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -48,7 +48,8 @@ struct binder_buffer {
 	unsigned free:1;
 	unsigned allow_user_free:1;
 	unsigned async_transaction:1;
-	unsigned debug_id:29;
+	unsigned free_in_progress:1;
+	unsigned debug_id:28;
 
 	struct binder_transaction *transaction;
 
@@ -109,8 +110,8 @@ extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 extern void binder_alloc_init(struct binder_alloc *alloc);
 extern void binder_alloc_vma_close(struct binder_alloc *alloc);
 extern struct binder_buffer *
-binder_alloc_buffer_lookup(struct binder_alloc *alloc,
-			   uintptr_t user_ptr);
+binder_alloc_prepare_to_free(struct binder_alloc *alloc,
+			     uintptr_t user_ptr);
 extern void binder_alloc_free_buf(struct binder_alloc *alloc,
 				  struct binder_buffer *buffer);
 extern int binder_alloc_mmap_handler(struct binder_alloc *alloc,
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 18/37] binder: add more debug info when allocation fails.
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (16 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 17/37] binder: protect against two threads freeing buffer Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 19/37] binder: use atomic for transaction_log index Todd Kjos
                   ` (19 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos; +Cc: Martijn Coenen, Siqi Lin

From: Martijn Coenen <maco@android.com>

Display information about allocated/free space whenever
binder buffer allocation fails on synchronous
transactions.

Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Siqi Lin <siqilin@google.com>
---
 drivers/android/binder_alloc.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 2a2e41b13de5..40f31df60580 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -337,8 +337,36 @@ struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc,
 		}
 	}
 	if (best_fit == NULL) {
+		size_t allocated_buffers = 0;
+		size_t largest_alloc_size = 0;
+		size_t total_alloc_size = 0;
+		size_t free_buffers = 0;
+		size_t largest_free_size = 0;
+		size_t total_free_size = 0;
+
+		for (n = rb_first(&alloc->allocated_buffers); n != NULL;
+		     n = rb_next(n)) {
+			buffer = rb_entry(n, struct binder_buffer, rb_node);
+			buffer_size = binder_alloc_buffer_size(alloc, buffer);
+			allocated_buffers++;
+			total_alloc_size += buffer_size;
+			if (buffer_size > largest_alloc_size)
+				largest_alloc_size = buffer_size;
+		}
+		for (n = rb_first(&alloc->free_buffers); n != NULL;
+		     n = rb_next(n)) {
+			buffer = rb_entry(n, struct binder_buffer, rb_node);
+			buffer_size = binder_alloc_buffer_size(alloc, buffer);
+			free_buffers++;
+			total_free_size += buffer_size;
+			if (buffer_size > largest_free_size)
+				largest_free_size = buffer_size;
+		}
 		pr_err("%d: binder_alloc_buf size %zd failed, no address space\n",
 			alloc->pid, size);
+		pr_err("allocated: %zd (num: %zd largest: %zd), free: %zd (num: %zd largest: %zd)\n",
+		       total_alloc_size, allocated_buffers, largest_alloc_size,
+		       total_free_size, free_buffers, largest_free_size);
 		return ERR_PTR(-ENOSPC);
 	}
 	if (n == NULL) {
@@ -698,9 +726,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 static void print_binder_buffer(struct seq_file *m, const char *prefix,
 				struct binder_buffer *buffer)
 {
-	seq_printf(m, "%s %d: %pK size %zd:%zd %s\n",
+	seq_printf(m, "%s %d: %pK size %zd:%zd:%zd %s\n",
 		   prefix, buffer->debug_id, buffer->data,
 		   buffer->data_size, buffer->offsets_size,
+		   buffer->extra_buffers_size,
 		   buffer->transaction ? "active" : "delivered");
 }
 
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 19/37] binder: use atomic for transaction_log index
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (17 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 18/37] binder: add more debug info when allocation fails Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 20/37] binder: refactor binder_pop_transaction Todd Kjos
                   ` (18 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

The log->next index for the transaction log was
not protected when incremented. This led to a
case where log->next++ produced an index beyond
the bounds of log->entry and eventually a bad
access to memory.

Fixed by making the log index atomic and converting
it to an array index with "% ARRAY_SIZE(log->entry)".

Also added a completion marker (debug_id_done) to each
log entry, written last so the print code can tell
whether the entry is complete.
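
A userspace sketch of the same scheme with C11 atomics
(sizes and names are illustrative; atomic_fetch_add + 1
plays the role of atomic_inc_return):

#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

#define LOG_SIZE 32

struct log_entry { int debug_id; };

static struct log_entry entries[LOG_SIZE];
static atomic_uint log_cur = UINT_MAX;	/* first increment wraps to 0 */

/*
 * Each writer atomically claims a unique sequence number; the
 * modulo maps it onto the fixed array, so concurrent writers can
 * never produce the out-of-range index that the unlocked
 * 'log->next++' could.
 */
static struct log_entry *log_add(void)
{
	unsigned int cur = atomic_fetch_add(&log_cur, 1) + 1;

	return &entries[cur % LOG_SIZE];
}

int main(void)
{
	struct log_entry *e = log_add();

	e->debug_id = 1;
	printf("slot %td\n", e - entries);
	return 0;
}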

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 75 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 59 insertions(+), 16 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index a1912a22c89c..cb78a4e6872d 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -187,6 +187,7 @@ static inline void binder_stats_created(enum binder_stat_types type)
 
 struct binder_transaction_log_entry {
 	int debug_id;
+	int debug_id_done;
 	int call_type;
 	int from_proc;
 	int from_thread;
@@ -202,8 +203,8 @@ struct binder_transaction_log_entry {
 	const char *context_name;
 };
 struct binder_transaction_log {
-	int next;
-	int full;
+	atomic_t cur;
+	bool full;
 	struct binder_transaction_log_entry entry[32];
 };
 static struct binder_transaction_log binder_transaction_log;
@@ -213,14 +214,19 @@ static struct binder_transaction_log_entry *binder_transaction_log_add(
 	struct binder_transaction_log *log)
 {
 	struct binder_transaction_log_entry *e;
+	unsigned int cur = atomic_inc_return(&log->cur);
 
-	e = &log->entry[log->next];
-	memset(e, 0, sizeof(*e));
-	log->next++;
-	if (log->next == ARRAY_SIZE(log->entry)) {
-		log->next = 0;
+	if (cur >= ARRAY_SIZE(log->entry))
 		log->full = 1;
-	}
+	e = &log->entry[cur % ARRAY_SIZE(log->entry)];
+	WRITE_ONCE(e->debug_id_done, 0);
+	/*
+	 * write-barrier to synchronize access to e->debug_id_done.
+	 * We make sure the initialized 0 value is seen
+	 * before the other fields are zeroed by memset().
+	 */
+	smp_wmb();
+	memset(e, 0, sizeof(*e));
 	return e;
 }
 
@@ -1400,8 +1406,10 @@ static void binder_transaction(struct binder_proc *proc,
 	struct binder_buffer_object *last_fixup_obj = NULL;
 	binder_size_t last_fixup_min_off = 0;
 	struct binder_context *context = proc->context;
+	int t_debug_id = atomic_inc_return(&binder_last_id);
 
 	e = binder_transaction_log_add(&binder_transaction_log);
+	e->debug_id = t_debug_id;
 	e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
 	e->from_proc = proc->pid;
 	e->from_thread = thread->pid;
@@ -1545,8 +1553,7 @@ static void binder_transaction(struct binder_proc *proc,
 	}
 	binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
 
-	t->debug_id = atomic_inc_return(&binder_last_id);
-	e->debug_id = t->debug_id;
+	t->debug_id = t_debug_id;
 
 	if (reply)
 		binder_debug(BINDER_DEBUG_TRANSACTION,
@@ -1821,6 +1828,12 @@ static void binder_transaction(struct binder_proc *proc,
 		else
 			wake_up_interruptible(target_wait);
 	}
+	/*
+	 * write barrier to synchronize with initialization
+	 * of log entry
+	 */
+	smp_wmb();
+	WRITE_ONCE(e->debug_id_done, t_debug_id);
 	return;
 
 err_translate_failed:
@@ -1858,6 +1871,13 @@ static void binder_transaction(struct binder_proc *proc,
 		e->return_error_line = return_error_line;
 		fe = binder_transaction_log_add(&binder_transaction_log_failed);
 		*fe = *e;
+		/*
+		 * write barrier to synchronize with initialization
+		 * of log entry
+		 */
+		smp_wmb();
+		WRITE_ONCE(e->debug_id_done, t_debug_id);
+		WRITE_ONCE(fe->debug_id_done, t_debug_id);
 	}
 
 	BUG_ON(thread->return_error != BR_OK);
@@ -3718,27 +3738,47 @@ static int binder_proc_show(struct seq_file *m, void *unused)
 static void print_binder_transaction_log_entry(struct seq_file *m,
 					struct binder_transaction_log_entry *e)
 {
+	int debug_id = READ_ONCE(e->debug_id_done);
+	/*
+	 * read barrier to guarantee debug_id_done read before
+	 * we print the log values
+	 */
+	smp_rmb();
 	seq_printf(m,
-		   "%d: %s from %d:%d to %d:%d context %s node %d handle %d size %d:%d ret %d/%d l=%d\n",
+		   "%d: %s from %d:%d to %d:%d context %s node %d handle %d size %d:%d ret %d/%d l=%d",
 		   e->debug_id, (e->call_type == 2) ? "reply" :
 		   ((e->call_type == 1) ? "async" : "call "), e->from_proc,
 		   e->from_thread, e->to_proc, e->to_thread, e->context_name,
 		   e->to_node, e->target_handle, e->data_size, e->offsets_size,
 		   e->return_error, e->return_error_param,
 		   e->return_error_line);
+	/*
+	 * read-barrier to guarantee read of debug_id_done after
+	 * done printing the fields of the entry
+	 */
+	smp_rmb();
+	seq_printf(m, debug_id && debug_id == READ_ONCE(e->debug_id_done) ?
+			"\n" : " (incomplete)\n");
 }
 
 static int binder_transaction_log_show(struct seq_file *m, void *unused)
 {
 	struct binder_transaction_log *log = m->private;
+	unsigned int log_cur = atomic_read(&log->cur);
+	unsigned int count;
+	unsigned int cur;
 	int i;
 
-	if (log->full) {
-		for (i = log->next; i < ARRAY_SIZE(log->entry); i++)
-			print_binder_transaction_log_entry(m, &log->entry[i]);
+	count = log_cur + 1;
+	cur = count < ARRAY_SIZE(log->entry) && !log->full ?
+		0 : count % ARRAY_SIZE(log->entry);
+	if (count > ARRAY_SIZE(log->entry) || log->full)
+		count = ARRAY_SIZE(log->entry);
+	for (i = 0; i < count; i++) {
+		unsigned int index = cur++ % ARRAY_SIZE(log->entry);
+
+		print_binder_transaction_log_entry(m, &log->entry[index]);
 	}
-	for (i = 0; i < log->next; i++)
-		print_binder_transaction_log_entry(m, &log->entry[i]);
 	return 0;
 }
 
@@ -3793,6 +3833,9 @@ static int __init binder_init(void)
 	struct binder_device *device;
 	struct hlist_node *tmp;
 
+	atomic_set(&binder_transaction_log.cur, ~0U);
+	atomic_set(&binder_transaction_log_failed.cur, ~0U);
+
 	binder_debugfs_dir_entry_root = debugfs_create_dir("binder", NULL);
 	if (binder_debugfs_dir_entry_root)
 		binder_debugfs_dir_entry_proc = debugfs_create_dir("proc",
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 20/37] binder: refactor binder_pop_transaction
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (18 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 19/37] binder: use atomic for transaction_log index Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 21/37] binder: guarantee txn complete / errors delivered in-order Todd Kjos
                   ` (17 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

binder_pop_transaction needs to be split into two pieces
to allow the proc lock to be held on entry to dequeue the
transaction stack, but no lock to be held when kfree'ing
the transaction.

Split into binder_pop_transaction_locked and
binder_free_transaction (the actual locks are still
to be added).
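
A userspace sketch of the intended split, with a pthread
mutex standing in for the future proc lock (all names are
illustrative):

#include <pthread.h>
#include <stdlib.h>

struct txn { struct txn *from_parent; };

static pthread_mutex_t proc_lock = PTHREAD_MUTEX_INITIALIZER;

/* Unlink from the transaction stack: needs the lock... */
static void pop_transaction_locked(struct txn **stack, struct txn *t)
{
	*stack = t->from_parent;
}

/* ...free the memory: must not hold the lock. */
static void free_transaction(struct txn *t)
{
	free(t);
}

static void reply_done(struct txn **stack, struct txn *t)
{
	pthread_mutex_lock(&proc_lock);
	pop_transaction_locked(stack, t);
	pthread_mutex_unlock(&proc_lock);
	free_transaction(t);	/* t is no longer reachable here */
}

int main(void)
{
	struct txn *t = calloc(1, sizeof(*t));
	struct txn *stack = t;

	if (!t)
		return 1;
	reply_done(&stack, t);
	return 0;
}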

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index cb78a4e6872d..d2fcf3cc29a6 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -768,14 +768,16 @@ static int binder_dec_ref(struct binder_ref *ref, int strong)
 static void binder_pop_transaction(struct binder_thread *target_thread,
 				   struct binder_transaction *t)
 {
-	if (target_thread) {
-		BUG_ON(target_thread->transaction_stack != t);
-		BUG_ON(target_thread->transaction_stack->from != target_thread);
-		target_thread->transaction_stack =
-			target_thread->transaction_stack->from_parent;
-		t->from = NULL;
-	}
-	t->need_reply = 0;
+	BUG_ON(!target_thread);
+	BUG_ON(target_thread->transaction_stack != t);
+	BUG_ON(target_thread->transaction_stack->from != target_thread);
+	target_thread->transaction_stack =
+		target_thread->transaction_stack->from_parent;
+	t->from = NULL;
+}
+
+static void binder_free_transaction(struct binder_transaction *t)
+{
 	if (t->buffer)
 		t->buffer->transaction = NULL;
 	kfree(t);
@@ -808,6 +810,7 @@ static void binder_send_failed_reply(struct binder_transaction *t,
 				binder_pop_transaction(target_thread, t);
 				target_thread->return_error = error_code;
 				wake_up_interruptible(&target_thread->wait);
+				binder_free_transaction(t);
 			} else {
 				pr_err("reply failed, target thread, %d:%d, has error code %d already\n",
 					target_thread->proc->pid,
@@ -822,7 +825,7 @@ static void binder_send_failed_reply(struct binder_transaction *t,
 			     "send failed reply for transaction %d, target dead\n",
 			     t->debug_id);
 
-		binder_pop_transaction(target_thread, t);
+		binder_free_transaction(t);
 		if (next == NULL) {
 			binder_debug(BINDER_DEBUG_DEAD_BINDER,
 				     "reply failed, no target thread at root\n");
@@ -1806,6 +1809,7 @@ static void binder_transaction(struct binder_proc *proc,
 	if (reply) {
 		BUG_ON(t->buffer->async_transaction != 0);
 		binder_pop_transaction(target_thread, in_reply_to);
+		binder_free_transaction(in_reply_to);
 	} else if (!(t->flags & TF_ONE_WAY)) {
 		BUG_ON(t->buffer->async_transaction != 0);
 		t->need_reply = 1;
@@ -2635,9 +2639,7 @@ static int binder_thread_read(struct binder_proc *proc,
 			t->to_thread = thread;
 			thread->transaction_stack = t;
 		} else {
-			t->buffer->transaction = NULL;
-			kfree(t);
-			binder_stats_deleted(BINDER_STAT_TRANSACTION);
+			binder_free_transaction(t);
 		}
 		break;
 	}
@@ -2680,9 +2682,7 @@ static void binder_release_work(struct list_head *list)
 				binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
 					"undelivered transaction %d\n",
 					t->debug_id);
-				t->buffer->transaction = NULL;
-				kfree(t);
-				binder_stats_deleted(BINDER_STAT_TRANSACTION);
+				binder_free_transaction(t);
 			}
 		} break;
 		case BINDER_WORK_TRANSACTION_COMPLETE: {
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 21/37] binder: guarantee txn complete / errors delivered in-order
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (19 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 20/37] binder: refactor binder_pop_transaction Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 22/37] binder: make sure target_node has strong ref Todd Kjos
                   ` (16 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Since errors are tracked in the return_error/return_error2
fields of the binder_thread object and BR_TRANSACTION_COMPLETEs
can be tracked either in those fields or via the thread todo
work list, it is possible for errors to be reported ahead
of the associated txn complete.

Use the thread todo work list for errors to guarantee
order. Also changed binder_send_failed_reply to pop
the transaction even if it failed to send a reply.
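
A toy userspace model of the ordering guarantee a single
FIFO gives (the queue and all names are invented for
illustration):

#include <stdio.h>

/*
 * Both completions and errors flow through one FIFO per thread,
 * so userspace can never observe an error ahead of the
 * BR_TRANSACTION_COMPLETE it belongs to.
 */
enum work_type { WORK_TRANSACTION_COMPLETE, WORK_RETURN_ERROR };

struct work { enum work_type type; };

static struct work todo[8];
static int head, tail;

static void enqueue(enum work_type type)
{
	todo[tail++ % 8].type = type;
}

static void drain(void)
{
	while (head != tail) {
		struct work *w = &todo[head++ % 8];

		printf("%s\n", w->type == WORK_RETURN_ERROR ?
		       "BR_ERROR" : "BR_TRANSACTION_COMPLETE");
	}
}

int main(void)
{
	enqueue(WORK_TRANSACTION_COMPLETE);
	enqueue(WORK_RETURN_ERROR);
	drain();	/* complete always printed before the error */
	return 0;
}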

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 127 +++++++++++++++++++++++++++--------------------
 1 file changed, 73 insertions(+), 54 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index d2fcf3cc29a6..84a57dd7b973 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -249,6 +249,7 @@ struct binder_work {
 	enum {
 		BINDER_WORK_TRANSACTION = 1,
 		BINDER_WORK_TRANSACTION_COMPLETE,
+		BINDER_WORK_RETURN_ERROR,
 		BINDER_WORK_NODE,
 		BINDER_WORK_DEAD_BINDER,
 		BINDER_WORK_DEAD_BINDER_AND_CLEAR,
@@ -256,6 +257,11 @@ struct binder_work {
 	} type;
 };
 
+struct binder_error {
+	struct binder_work work;
+	uint32_t cmd;
+};
+
 struct binder_node {
 	int debug_id;
 	struct binder_work work;
@@ -350,10 +356,8 @@ struct binder_thread {
 	bool looper_need_return; /* can be written by other thread */
 	struct binder_transaction *transaction_stack;
 	struct list_head todo;
-	uint32_t return_error; /* Write failed, return error code in read buf */
-	uint32_t return_error2; /* Write failed, return error code in read */
-		/* buffer. Used when sending a reply to a dead process that */
-		/* we are also waiting on */
+	struct binder_error return_error;
+	struct binder_error reply_error;
 	wait_queue_head_t wait;
 	struct binder_stats stats;
 };
@@ -794,29 +798,24 @@ static void binder_send_failed_reply(struct binder_transaction *t,
 	while (1) {
 		target_thread = t->from;
 		if (target_thread) {
-			if (target_thread->return_error != BR_OK &&
-			   target_thread->return_error2 == BR_OK) {
-				target_thread->return_error2 =
-					target_thread->return_error;
-				target_thread->return_error = BR_OK;
-			}
-			if (target_thread->return_error == BR_OK) {
-				binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
-					     "send failed reply for transaction %d to %d:%d\n",
-					      t->debug_id,
-					      target_thread->proc->pid,
-					      target_thread->pid);
-
-				binder_pop_transaction(target_thread, t);
-				target_thread->return_error = error_code;
+			binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
+				     "send failed reply for transaction %d to %d:%d\n",
+				      t->debug_id,
+				      target_thread->proc->pid,
+				      target_thread->pid);
+
+			binder_pop_transaction(target_thread, t);
+			if (target_thread->reply_error.cmd == BR_OK) {
+				target_thread->reply_error.cmd = error_code;
+				list_add_tail(
+					&target_thread->reply_error.work.entry,
+					&target_thread->todo);
 				wake_up_interruptible(&target_thread->wait);
-				binder_free_transaction(t);
 			} else {
-				pr_err("reply failed, target thread, %d:%d, has error code %d already\n",
-					target_thread->proc->pid,
-					target_thread->pid,
-					target_thread->return_error);
+				WARN(1, "Unexpected reply error: %u\n",
+						target_thread->reply_error.cmd);
 			}
+			binder_free_transaction(t);
 			return;
 		}
 		next = t->from_parent;
@@ -1884,12 +1883,17 @@ static void binder_transaction(struct binder_proc *proc,
 		WRITE_ONCE(fe->debug_id_done, t_debug_id);
 	}
 
-	BUG_ON(thread->return_error != BR_OK);
+	BUG_ON(thread->return_error.cmd != BR_OK);
 	if (in_reply_to) {
-		thread->return_error = BR_TRANSACTION_COMPLETE;
+		thread->return_error.cmd = BR_TRANSACTION_COMPLETE;
+		list_add_tail(&thread->return_error.work.entry,
+			      &thread->todo);
 		binder_send_failed_reply(in_reply_to, return_error);
-	} else
-		thread->return_error = return_error;
+	} else {
+		thread->return_error.cmd = return_error;
+		list_add_tail(&thread->return_error.work.entry,
+			      &thread->todo);
+	}
 }
 
 static int binder_thread_write(struct binder_proc *proc,
@@ -1903,7 +1907,7 @@ static int binder_thread_write(struct binder_proc *proc,
 	void __user *ptr = buffer + *consumed;
 	void __user *end = buffer + size;
 
-	while (ptr < end && thread->return_error == BR_OK) {
+	while (ptr < end && thread->return_error.cmd == BR_OK) {
 		if (get_user(cmd, (uint32_t __user *)ptr))
 			return -EFAULT;
 		ptr += sizeof(uint32_t);
@@ -2183,7 +2187,12 @@ static int binder_thread_write(struct binder_proc *proc,
 				}
 				death = kzalloc(sizeof(*death), GFP_KERNEL);
 				if (death == NULL) {
-					thread->return_error = BR_ERROR;
+					WARN_ON(thread->return_error.cmd !=
+						BR_OK);
+					thread->return_error.cmd = BR_ERROR;
+					list_add_tail(
+					    &thread->return_error.work.entry,
+					    &thread->todo);
 					binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
 						     "%d:%d BC_REQUEST_DEATH_NOTIFICATION failed\n",
 						     proc->pid, thread->pid);
@@ -2299,8 +2308,7 @@ static int binder_has_proc_work(struct binder_proc *proc,
 
 static int binder_has_thread_work(struct binder_thread *thread)
 {
-	return !list_empty(&thread->todo) || thread->return_error != BR_OK ||
-		thread->looper_need_return;
+	return !list_empty(&thread->todo) || thread->looper_need_return;
 }
 
 static int binder_put_node_cmd(struct binder_proc *proc,
@@ -2356,25 +2364,6 @@ static int binder_thread_read(struct binder_proc *proc,
 	wait_for_proc_work = thread->transaction_stack == NULL &&
 				list_empty(&thread->todo);
 
-	if (thread->return_error != BR_OK && ptr < end) {
-		if (thread->return_error2 != BR_OK) {
-			if (put_user(thread->return_error2, (uint32_t __user *)ptr))
-				return -EFAULT;
-			ptr += sizeof(uint32_t);
-			binder_stat_br(proc, thread, thread->return_error2);
-			if (ptr == end)
-				goto done;
-			thread->return_error2 = BR_OK;
-		}
-		if (put_user(thread->return_error, (uint32_t __user *)ptr))
-			return -EFAULT;
-		ptr += sizeof(uint32_t);
-		binder_stat_br(proc, thread, thread->return_error);
-		thread->return_error = BR_OK;
-		goto done;
-	}
-
-
 	thread->looper |= BINDER_LOOPER_STATE_WAITING;
 	if (wait_for_proc_work)
 		proc->ready_threads++;
@@ -2441,6 +2430,19 @@ static int binder_thread_read(struct binder_proc *proc,
 		case BINDER_WORK_TRANSACTION: {
 			t = container_of(w, struct binder_transaction, work);
 		} break;
+		case BINDER_WORK_RETURN_ERROR: {
+			struct binder_error *e = container_of(
+					w, struct binder_error, work);
+
+			WARN_ON(e->cmd == BR_OK);
+			if (put_user(e->cmd, (uint32_t __user *)ptr))
+				return -EFAULT;
+			e->cmd = BR_OK;
+			ptr += sizeof(uint32_t);
+
+			binder_stat_br(proc, thread, cmd);
+			list_del(&w->entry);
+		} break;
 		case BINDER_WORK_TRANSACTION_COMPLETE: {
 			cmd = BR_TRANSACTION_COMPLETE;
 			if (put_user(cmd, (uint32_t __user *)ptr))
@@ -2685,6 +2687,14 @@ static void binder_release_work(struct list_head *list)
 				binder_free_transaction(t);
 			}
 		} break;
+		case BINDER_WORK_RETURN_ERROR: {
+			struct binder_error *e = container_of(
+					w, struct binder_error, work);
+
+			binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
+				"undelivered TRANSACTION_ERROR: %u\n",
+				e->cmd);
+		} break;
 		case BINDER_WORK_TRANSACTION_COMPLETE: {
 			binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
 				"undelivered TRANSACTION_COMPLETE\n");
@@ -2740,8 +2750,10 @@ static struct binder_thread *binder_get_thread(struct binder_proc *proc)
 		rb_link_node(&thread->rb_node, parent, p);
 		rb_insert_color(&thread->rb_node, &proc->threads);
 		thread->looper_need_return = true;
-		thread->return_error = BR_OK;
-		thread->return_error2 = BR_OK;
+		thread->return_error.work.type = BINDER_WORK_RETURN_ERROR;
+		thread->return_error.cmd = BR_OK;
+		thread->reply_error.work.type = BINDER_WORK_RETURN_ERROR;
+		thread->reply_error.cmd = BR_OK;
 	}
 	return thread;
 }
@@ -2799,7 +2811,7 @@ static unsigned int binder_poll(struct file *filp,
 	thread = binder_get_thread(proc);
 
 	wait_for_proc_work = thread->transaction_stack == NULL &&
-		list_empty(&thread->todo) && thread->return_error == BR_OK;
+		list_empty(&thread->todo);
 
 	binder_unlock(__func__);
 
@@ -3378,6 +3390,13 @@ static void print_binder_work(struct seq_file *m, const char *prefix,
 		t = container_of(w, struct binder_transaction, work);
 		print_binder_transaction(m, transaction_prefix, t);
 		break;
+	case BINDER_WORK_RETURN_ERROR: {
+		struct binder_error *e = container_of(
+				w, struct binder_error, work);
+
+		seq_printf(m, "%stransaction error: %u\n",
+			   prefix, e->cmd);
+	} break;
 	case BINDER_WORK_TRANSACTION_COMPLETE:
 		seq_printf(m, "%stransaction complete\n", prefix);
 		break;
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 22/37] binder: make sure target_node has strong ref
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (20 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 21/37] binder: guarantee txn complete / errors delivered in-order Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 23/37] binder: make sure accesses to proc/thread are safe Todd Kjos
                   ` (15 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

When initiating a transaction, the target_node must
have a strong ref on it. Then we take a second
strong ref to make sure the node survives until the
transaction is complete.
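
A userspace sketch of the pin/unpin pattern, with C11
atomics standing in for the node's strong refcount (names
are illustrative):

#include <stdatomic.h>
#include <stdio.h>

struct node { atomic_int strong; };

/*
 * The ref lookup proves a strong reference already exists; taking
 * one more pins the node for the whole transaction, even if the
 * original holder drops its reference mid-flight.
 */
static void txn_pin_target(struct node *target)
{
	atomic_fetch_add(&target->strong, 1);
}

static void txn_unpin_target(struct node *target)
{
	/* dropping to zero would allow the node to be freed */
	atomic_fetch_sub(&target->strong, 1);
}

int main(void)
{
	struct node n = { 1 };	/* userspace already holds one strong ref */

	txn_pin_target(&n);
	printf("strong refs during txn: %d\n", atomic_load(&n.strong));
	txn_unpin_target(&n);
	return 0;
}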

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 84a57dd7b973..fb79f40111eb 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -1469,8 +1469,19 @@ static void binder_transaction(struct binder_proc *proc,
 		if (tr->target.handle) {
 			struct binder_ref *ref;
 
+			/*
+			 * There must already be a strong ref
+			 * on this node. If so, do a strong
+			 * increment on the node to ensure it
+			 * stays alive until the transaction is
+			 * done.
+			 */
 			ref = binder_get_ref(proc, tr->target.handle, true);
-			if (ref == NULL) {
+			if (ref) {
+				binder_inc_node(ref->node, 1, 0, NULL);
+				target_node = ref->node;
+			}
+			if (target_node == NULL) {
 				binder_user_error("%d:%d got transaction to invalid handle\n",
 					proc->pid, thread->pid);
 				return_error = BR_FAILED_REPLY;
@@ -1478,7 +1489,6 @@ static void binder_transaction(struct binder_proc *proc,
 				return_error_line = __LINE__;
 				goto err_invalid_target_handle;
 			}
-			target_node = ref->node;
 		} else {
 			mutex_lock(&context->context_mgr_node_lock);
 			target_node = context->binder_context_mgr_node;
@@ -1488,6 +1498,7 @@ static void binder_transaction(struct binder_proc *proc,
 				return_error_line = __LINE__;
 				goto err_no_context_mgr_node;
 			}
+			binder_inc_node(target_node, 1, 0, NULL);
 			mutex_unlock(&context->context_mgr_node_lock);
 		}
 		e->to_node = target_node->debug_id;
@@ -1608,9 +1619,6 @@ static void binder_transaction(struct binder_proc *proc,
 	t->buffer->transaction = t;
 	t->buffer->target_node = target_node;
 	trace_binder_transaction_alloc_buf(t->buffer);
-	if (target_node)
-		binder_inc_node(target_node, 1, 0, NULL);
-
 	off_start = (binder_size_t *)(t->buffer->data +
 				      ALIGN(tr->data_size, sizeof(void *)));
 	offp = off_start;
@@ -1846,6 +1854,7 @@ static void binder_transaction(struct binder_proc *proc,
 err_copy_data_failed:
 	trace_binder_transaction_failed_buffer_release(t->buffer);
 	binder_transaction_buffer_release(target_proc, t->buffer, offp);
+	target_node = NULL;
 	t->buffer->transaction = NULL;
 	binder_alloc_free_buf(&target_proc->alloc, t->buffer);
 err_binder_alloc_buf_failed:
@@ -1860,6 +1869,9 @@ static void binder_transaction(struct binder_proc *proc,
 err_dead_binder:
 err_invalid_target_handle:
 err_no_context_mgr_node:
+	if (target_node)
+		binder_dec_node(target_node, 1, 0);
+
 	binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
 		     "%d:%d transaction failed %d/%d, size %lld-%lld line %d\n",
 		     proc->pid, thread->pid, return_error, return_error_param,
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 23/37] binder: make sure accesses to proc/thread are safe
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (21 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 22/37] binder: make sure target_node has strong ref Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 24/37] binder: refactor binder ref inc/dec for thread safety Todd Kjos
                   ` (14 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

binder_thread and binder_proc may be accessed by other
threads while a transaction is being processed. They must
therefore be prevented from being freed while a transaction
that references them is in progress.

This is done by introducing a temporary reference
counter for threads and procs that indicates that the
object is in use and must not be freed. binder_thread_dec_tmpref()
and binder_proc_dec_tmpref() are used to decrement
the temporary reference.

It is safe to free a binder_thread if there
is no reference and it has been released
(indicated by thread->is_dead).

It is safe to free a binder_proc if it has no
remaining threads and no reference.

A spinlock is added to the binder_transaction
to safely access and set references for t->from
and for debug code to safely access t->to_thread
and t->to_proc.
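
A userspace sketch of the tmp_ref pattern, with a pthread
mutex standing in for the transaction spinlock (all names
are illustrative and the teardown is simplified):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct bthread {
	atomic_int tmp_ref;
	bool is_dead;
};

struct txn {
	pthread_mutex_t lock;	/* protects 'from' */
	struct bthread *from;
};

/*
 * Snapshot t->from and pin it in one critical section, so a
 * concurrent thread teardown (which NULLs t->from under the same
 * lock) can never hand us a pointer about to be freed.
 */
static struct bthread *get_txn_from(struct txn *t)
{
	struct bthread *from;

	pthread_mutex_lock(&t->lock);
	from = t->from;
	if (from)
		atomic_fetch_add(&from->tmp_ref, 1);
	pthread_mutex_unlock(&t->lock);
	return from;
}

static void thread_dec_tmpref(struct bthread *thread)
{
	/* last user of a thread that has already been released */
	if (atomic_fetch_sub(&thread->tmp_ref, 1) == 1 && thread->is_dead)
		free(thread);
}

int main(void)
{
	struct bthread *th = calloc(1, sizeof(*th));
	struct txn t = { PTHREAD_MUTEX_INITIALIZER, th };
	struct bthread *from;

	if (!th)
		return 1;
	from = get_txn_from(&t);
	if (from) {
		th->is_dead = true;	/* teardown happened elsewhere */
		thread_dec_tmpref(from);	/* last ref frees it */
	}
	return 0;
}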

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 233 +++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 206 insertions(+), 27 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index fb79f40111eb..ca7d866b89e8 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -325,6 +325,7 @@ struct binder_proc {
 	struct files_struct *files;
 	struct hlist_node deferred_work_node;
 	int deferred_work;
+	bool is_dead;
 
 	struct list_head todo;
 	wait_queue_head_t wait;
@@ -334,6 +335,7 @@ struct binder_proc {
 	int requested_threads;
 	int requested_threads_started;
 	int ready_threads;
+	int tmp_ref;
 	long default_priority;
 	struct dentry *debugfs_entry;
 	struct binder_alloc alloc;
@@ -360,6 +362,8 @@ struct binder_thread {
 	struct binder_error reply_error;
 	wait_queue_head_t wait;
 	struct binder_stats stats;
+	atomic_t tmp_ref;
+	bool is_dead;
 };
 
 struct binder_transaction {
@@ -379,10 +383,19 @@ struct binder_transaction {
 	long	priority;
 	long	saved_priority;
 	kuid_t	sender_euid;
+	/**
+	 * @lock:  protects @from, @to_proc, and @to_thread
+	 *
+	 * @from, @to_proc, and @to_thread can be set to NULL
+	 * during thread teardown
+	 */
+	spinlock_t lock;
 };
 
 static void
 binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
+static void binder_free_thread(struct binder_thread *thread);
+static void binder_free_proc(struct binder_proc *proc);
 
 static int task_get_unused_fd_flags(struct binder_proc *proc, int flags)
 {
@@ -780,6 +793,79 @@ static void binder_pop_transaction(struct binder_thread *target_thread,
 	t->from = NULL;
 }
 
+/**
+ * binder_thread_dec_tmpref() - decrement thread->tmp_ref
+ * @thread:	thread to decrement
+ *
+ * A thread needs to be kept alive while being used to create or
+ * handle a transaction. binder_get_txn_from() is used to safely
+ * extract t->from from a binder_transaction and keep the thread
+ * indicated by t->from from being freed. When done with that
+ * binder_thread, this function is called to decrement the
+ * tmp_ref and free if appropriate (thread has been released
+ * and no transaction being processed by the driver)
+ */
+static void binder_thread_dec_tmpref(struct binder_thread *thread)
+{
+	/*
+	 * atomic is used to protect the counter value while
+	 * it cannot reach zero or thread->is_dead is false
+	 *
+	 * TODO: future patch adds locking to ensure that the
+	 * check of tmp_ref and is_dead is done with a lock held
+	 */
+	atomic_dec(&thread->tmp_ref);
+	if (thread->is_dead && !atomic_read(&thread->tmp_ref)) {
+		binder_free_thread(thread);
+		return;
+	}
+}
+
+/**
+ * binder_proc_dec_tmpref() - decrement proc->tmp_ref
+ * @proc:	proc to decrement
+ *
+ * A binder_proc needs to be kept alive while being used to create or
+ * handle a transaction. proc->tmp_ref is incremented when
+ * creating a new transaction or the binder_proc is currently in-use
+ * by threads that are being released. When done with the binder_proc,
+ * this function is called to decrement the counter and free the
+ * proc if appropriate (proc has been released, all threads have
+ * been released and not currently in use to process a transaction).
+ */
+static void binder_proc_dec_tmpref(struct binder_proc *proc)
+{
+	proc->tmp_ref--;
+	if (proc->is_dead && RB_EMPTY_ROOT(&proc->threads) &&
+			!proc->tmp_ref) {
+		binder_free_proc(proc);
+		return;
+	}
+}
+
+/**
+ * binder_get_txn_from() - safely extract the "from" thread in transaction
+ * @t:	binder transaction for t->from
+ *
+ * Atomically return the "from" thread and increment the tmp_ref
+ * count for the thread to ensure it stays alive until
+ * binder_thread_dec_tmpref() is called.
+ *
+ * Return: the value of t->from
+ */
+static struct binder_thread *binder_get_txn_from(
+		struct binder_transaction *t)
+{
+	struct binder_thread *from;
+
+	spin_lock(&t->lock);
+	from = t->from;
+	if (from)
+		atomic_inc(&from->tmp_ref);
+	spin_unlock(&t->lock);
+	return from;
+}
+
 static void binder_free_transaction(struct binder_transaction *t)
 {
 	if (t->buffer)
@@ -796,7 +882,7 @@ static void binder_send_failed_reply(struct binder_transaction *t,
 
 	BUG_ON(t->flags & TF_ONE_WAY);
 	while (1) {
-		target_thread = t->from;
+		target_thread = binder_get_txn_from(t);
 		if (target_thread) {
 			binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
 				     "send failed reply for transaction %d to %d:%d\n",
@@ -815,6 +901,7 @@ static void binder_send_failed_reply(struct binder_transaction *t,
 				WARN(1, "Unexpected reply error: %u\n",
 						target_thread->reply_error.cmd);
 			}
+			binder_thread_dec_tmpref(target_thread);
 			binder_free_transaction(t);
 			return;
 		}
@@ -1395,7 +1482,7 @@ static void binder_transaction(struct binder_proc *proc,
 	binder_size_t *offp, *off_end, *off_start;
 	binder_size_t off_min;
 	u8 *sg_bufp, *sg_buf_end;
-	struct binder_proc *target_proc;
+	struct binder_proc *target_proc = NULL;
 	struct binder_thread *target_thread = NULL;
 	struct binder_node *target_node = NULL;
 	struct list_head *target_list;
@@ -1432,12 +1519,14 @@ static void binder_transaction(struct binder_proc *proc,
 		}
 		binder_set_nice(in_reply_to->saved_priority);
 		if (in_reply_to->to_thread != thread) {
+			spin_lock(&in_reply_to->lock);
 			binder_user_error("%d:%d got reply transaction with bad transaction stack, transaction %d has target %d:%d\n",
 				proc->pid, thread->pid, in_reply_to->debug_id,
 				in_reply_to->to_proc ?
 				in_reply_to->to_proc->pid : 0,
 				in_reply_to->to_thread ?
 				in_reply_to->to_thread->pid : 0);
+			spin_unlock(&in_reply_to->lock);
 			return_error = BR_FAILED_REPLY;
 			return_error_param = -EPROTO;
 			return_error_line = __LINE__;
@@ -1445,7 +1534,7 @@ static void binder_transaction(struct binder_proc *proc,
 			goto err_bad_call_stack;
 		}
 		thread->transaction_stack = in_reply_to->to_parent;
-		target_thread = in_reply_to->from;
+		target_thread = binder_get_txn_from(in_reply_to);
 		if (target_thread == NULL) {
 			return_error = BR_DEAD_REPLY;
 			return_error_line = __LINE__;
@@ -1465,6 +1554,7 @@ static void binder_transaction(struct binder_proc *proc,
 			goto err_dead_binder;
 		}
 		target_proc = target_thread->proc;
+		target_proc->tmp_ref++;
 	} else {
 		if (tr->target.handle) {
 			struct binder_ref *ref;
@@ -1508,6 +1598,7 @@ static void binder_transaction(struct binder_proc *proc,
 			return_error_line = __LINE__;
 			goto err_dead_binder;
 		}
+		target_proc->tmp_ref++;
 		if (security_binder_transaction(proc->tsk,
 						target_proc->tsk) < 0) {
 			return_error = BR_FAILED_REPLY;
@@ -1520,19 +1611,30 @@ static void binder_transaction(struct binder_proc *proc,
 
 			tmp = thread->transaction_stack;
 			if (tmp->to_thread != thread) {
+				spin_lock(&tmp->lock);
 				binder_user_error("%d:%d got new transaction with bad transaction stack, transaction %d has target %d:%d\n",
 					proc->pid, thread->pid, tmp->debug_id,
 					tmp->to_proc ? tmp->to_proc->pid : 0,
 					tmp->to_thread ?
 					tmp->to_thread->pid : 0);
+				spin_unlock(&tmp->lock);
 				return_error = BR_FAILED_REPLY;
 				return_error_param = -EPROTO;
 				return_error_line = __LINE__;
 				goto err_bad_call_stack;
 			}
 			while (tmp) {
-				if (tmp->from && tmp->from->proc == target_proc)
-					target_thread = tmp->from;
+				struct binder_thread *from;
+
+				spin_lock(&tmp->lock);
+				from = tmp->from;
+				if (from && from->proc == target_proc) {
+					atomic_inc(&from->tmp_ref);
+					target_thread = from;
+					spin_unlock(&tmp->lock);
+					break;
+				}
+				spin_unlock(&tmp->lock);
 				tmp = tmp->from_parent;
 			}
 		}
@@ -1556,6 +1658,7 @@ static void binder_transaction(struct binder_proc *proc,
 		goto err_alloc_t_failed;
 	}
 	binder_stats_created(BINDER_STAT_TRANSACTION);
+	spin_lock_init(&t->lock);
 
 	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
 	if (tcomplete == NULL) {
@@ -1814,6 +1917,8 @@ static void binder_transaction(struct binder_proc *proc,
 	list_add_tail(&tcomplete->entry, &thread->todo);
 
 	if (reply) {
+		if (target_thread->is_dead)
+			goto err_dead_proc_or_thread;
 		BUG_ON(t->buffer->async_transaction != 0);
 		binder_pop_transaction(target_thread, in_reply_to);
 		binder_free_transaction(in_reply_to);
@@ -1822,6 +1927,11 @@ static void binder_transaction(struct binder_proc *proc,
 		t->need_reply = 1;
 		t->from_parent = thread->transaction_stack;
 		thread->transaction_stack = t;
+		if (target_proc->is_dead ||
+				(target_thread && target_thread->is_dead)) {
+			binder_pop_transaction(thread, t);
+			goto err_dead_proc_or_thread;
+		}
 	} else {
 		BUG_ON(target_node == NULL);
 		BUG_ON(t->buffer->async_transaction != 1);
@@ -1830,6 +1940,9 @@ static void binder_transaction(struct binder_proc *proc,
 			target_wait = NULL;
 		} else
 			target_node->has_async_transaction = 1;
+		if (target_proc->is_dead ||
+				(target_thread && target_thread->is_dead))
+			goto err_dead_proc_or_thread;
 	}
 	t->work.type = BINDER_WORK_TRANSACTION;
 	list_add_tail(&t->work.entry, target_list);
@@ -1839,6 +1952,9 @@ static void binder_transaction(struct binder_proc *proc,
 		else
 			wake_up_interruptible(target_wait);
 	}
+	if (target_thread)
+		binder_thread_dec_tmpref(target_thread);
+	binder_proc_dec_tmpref(target_proc);
 	/*
 	 * write barrier to synchronize with initialization
 	 * of log entry
@@ -1847,6 +1963,9 @@ static void binder_transaction(struct binder_proc *proc,
 	WRITE_ONCE(e->debug_id_done, t_debug_id);
 	return;
 
+err_dead_proc_or_thread:
+	return_error = BR_DEAD_REPLY;
+	return_error_line = __LINE__;
 err_translate_failed:
 err_bad_object_type:
 err_bad_offset:
@@ -1869,6 +1988,10 @@ static void binder_transaction(struct binder_proc *proc,
 err_dead_binder:
 err_invalid_target_handle:
 err_no_context_mgr_node:
+	if (target_thread)
+		binder_thread_dec_tmpref(target_thread);
+	if (target_proc)
+		binder_proc_dec_tmpref(target_proc);
 	if (target_node)
 		binder_dec_node(target_node, 1, 0);
 
@@ -2421,6 +2544,7 @@ static int binder_thread_read(struct binder_proc *proc,
 		struct binder_transaction_data tr;
 		struct binder_work *w;
 		struct binder_transaction *t = NULL;
+		struct binder_thread *t_from;
 
 		if (!list_empty(&thread->todo)) {
 			w = list_first_entry(&thread->todo, struct binder_work,
@@ -2609,8 +2733,9 @@ static int binder_thread_read(struct binder_proc *proc,
 		tr.flags = t->flags;
 		tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
 
-		if (t->from) {
-			struct task_struct *sender = t->from->proc->tsk;
+		t_from = binder_get_txn_from(t);
+		if (t_from) {
+			struct task_struct *sender = t_from->proc->tsk;
 
 			tr.sender_pid = task_tgid_nr_ns(sender,
 							task_active_pid_ns(current));
@@ -2627,11 +2752,17 @@ static int binder_thread_read(struct binder_proc *proc,
 					ALIGN(t->buffer->data_size,
 					    sizeof(void *));
 
-		if (put_user(cmd, (uint32_t __user *)ptr))
+		if (put_user(cmd, (uint32_t __user *)ptr)) {
+			if (t_from)
+				binder_thread_dec_tmpref(t_from);
 			return -EFAULT;
+		}
 		ptr += sizeof(uint32_t);
-		if (copy_to_user(ptr, &tr, sizeof(tr)))
+		if (copy_to_user(ptr, &tr, sizeof(tr))) {
+			if (t_from)
+				binder_thread_dec_tmpref(t_from);
 			return -EFAULT;
+		}
 		ptr += sizeof(tr);
 
 		trace_binder_transaction_received(t);
@@ -2641,11 +2772,13 @@ static int binder_thread_read(struct binder_proc *proc,
 			     proc->pid, thread->pid,
 			     (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
 			     "BR_REPLY",
-			     t->debug_id, t->from ? t->from->proc->pid : 0,
-			     t->from ? t->from->pid : 0, cmd,
+			     t->debug_id, t_from ? t_from->proc->pid : 0,
+			     t_from ? t_from->pid : 0, cmd,
 			     t->buffer->data_size, t->buffer->offsets_size,
 			     (u64)tr.data.ptr.buffer, (u64)tr.data.ptr.offsets);
 
+		if (t_from)
+			binder_thread_dec_tmpref(t_from);
 		list_del(&t->work.entry);
 		t->buffer->allow_user_free = 1;
 		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
@@ -2757,6 +2890,7 @@ static struct binder_thread *binder_get_thread(struct binder_proc *proc)
 		binder_stats_created(BINDER_STAT_THREAD);
 		thread->proc = proc;
 		thread->pid = current->pid;
+		atomic_set(&thread->tmp_ref, 0);
 		init_waitqueue_head(&thread->wait);
 		INIT_LIST_HEAD(&thread->todo);
 		rb_link_node(&thread->rb_node, parent, p);
@@ -2770,18 +2904,55 @@ static struct binder_thread *binder_get_thread(struct binder_proc *proc)
 	return thread;
 }
 
-static int binder_free_thread(struct binder_proc *proc,
-			      struct binder_thread *thread)
+static void binder_free_proc(struct binder_proc *proc)
+{
+	BUG_ON(!list_empty(&proc->todo));
+	BUG_ON(!list_empty(&proc->delivered_death));
+	binder_alloc_deferred_release(&proc->alloc);
+	put_task_struct(proc->tsk);
+	binder_stats_deleted(BINDER_STAT_PROC);
+	kfree(proc);
+}
+
+static void binder_free_thread(struct binder_thread *thread)
+{
+	BUG_ON(!list_empty(&thread->todo));
+	binder_stats_deleted(BINDER_STAT_THREAD);
+	binder_proc_dec_tmpref(thread->proc);
+	kfree(thread);
+}
+
+static int binder_thread_release(struct binder_proc *proc,
+				 struct binder_thread *thread)
 {
 	struct binder_transaction *t;
 	struct binder_transaction *send_reply = NULL;
 	int active_transactions = 0;
+	struct binder_transaction *last_t = NULL;
 
+	/*
+	 * take a ref on the proc so it survives
+	 * after we remove this thread from proc->threads.
+	 * The corresponding dec is when we actually
+	 * free the thread in binder_free_thread()
+	 */
+	proc->tmp_ref++;
+	/*
+	 * take a ref on this thread to ensure it
+	 * survives while we are releasing it
+	 */
+	atomic_inc(&thread->tmp_ref);
 	rb_erase(&thread->rb_node, &proc->threads);
 	t = thread->transaction_stack;
-	if (t && t->to_thread == thread)
-		send_reply = t;
+	if (t) {
+		spin_lock(&t->lock);
+		if (t->to_thread == thread)
+			send_reply = t;
+	}
+	thread->is_dead = true;
+
 	while (t) {
+		last_t = t;
 		active_transactions++;
 		binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
 			     "release %d:%d transaction %d %s, still active\n",
@@ -2802,12 +2973,15 @@ static int binder_free_thread(struct binder_proc *proc,
 			t = t->from_parent;
 		} else
 			BUG();
+		spin_unlock(&last_t->lock);
+		if (t)
+			spin_lock(&t->lock);
 	}
+
 	if (send_reply)
 		binder_send_failed_reply(send_reply, BR_DEAD_REPLY);
 	binder_release_work(&thread->todo);
-	kfree(thread);
-	binder_stats_deleted(BINDER_STAT_THREAD);
+	binder_thread_dec_tmpref(thread);
 	return active_transactions;
 }
 
@@ -2995,7 +3169,7 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	case BINDER_THREAD_EXIT:
 		binder_debug(BINDER_DEBUG_THREADS, "%d:%d exit\n",
 			     proc->pid, thread->pid);
-		binder_free_thread(proc, thread);
+		binder_thread_release(proc, thread);
 		thread = NULL;
 		break;
 	case BINDER_VERSION: {
@@ -3265,7 +3439,13 @@ static void binder_deferred_release(struct binder_proc *proc)
 		context->binder_context_mgr_node = NULL;
 	}
 	mutex_unlock(&context->context_mgr_node_lock);
+	/*
+	 * Make sure proc stays alive after we
+	 * remove all the threads
+	 */
+	proc->tmp_ref++;
 
+	proc->is_dead = true;
 	threads = 0;
 	active_transactions = 0;
 	while ((n = rb_first(&proc->threads))) {
@@ -3273,7 +3453,7 @@ static void binder_deferred_release(struct binder_proc *proc)
 
 		thread = rb_entry(n, struct binder_thread, rb_node);
 		threads++;
-		active_transactions += binder_free_thread(proc, thread);
+		active_transactions += binder_thread_release(proc, thread);
 	}
 
 	nodes = 0;
@@ -3299,17 +3479,12 @@ static void binder_deferred_release(struct binder_proc *proc)
 	binder_release_work(&proc->todo);
 	binder_release_work(&proc->delivered_death);
 
-	binder_alloc_deferred_release(&proc->alloc);
-	binder_stats_deleted(BINDER_STAT_PROC);
-
-	put_task_struct(proc->tsk);
-
 	binder_debug(BINDER_DEBUG_OPEN_CLOSE,
 		     "%s: %d threads %d, nodes %d (ref %d), refs %d, active transactions %d\n",
 		     __func__, proc->pid, threads, nodes, incoming_refs,
 		     outgoing_refs, active_transactions);
 
-	kfree(proc);
+	binder_proc_dec_tmpref(proc);
 }
 
 static void binder_deferred_func(struct work_struct *work)
@@ -3370,6 +3545,7 @@ binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer)
 static void print_binder_transaction(struct seq_file *m, const char *prefix,
 				     struct binder_transaction *t)
 {
+	spin_lock(&t->lock);
 	seq_printf(m,
 		   "%s %d: %p from %d:%d to %d:%d code %x flags %x pri %ld r%d",
 		   prefix, t->debug_id, t,
@@ -3378,6 +3554,8 @@ static void print_binder_transaction(struct seq_file *m, const char *prefix,
 		   t->to_proc ? t->to_proc->pid : 0,
 		   t->to_thread ? t->to_thread->pid : 0,
 		   t->code, t->flags, t->priority, t->need_reply);
+	spin_unlock(&t->lock);
+
 	if (t->buffer == NULL) {
 		seq_puts(m, " buffer free\n");
 		return;
@@ -3442,9 +3620,10 @@ static void print_binder_thread(struct seq_file *m,
 	size_t start_pos = m->count;
 	size_t header_pos;
 
-	seq_printf(m, "  thread %d: l %02x need_return %d\n",
+	seq_printf(m, "  thread %d: l %02x need_return %d tr %d\n",
 			thread->pid, thread->looper,
-			thread->looper_need_return);
+			thread->looper_need_return,
+			atomic_read(&thread->tmp_ref));
 	header_pos = m->count;
 	t = thread->transaction_stack;
 	while (t) {
-- 
2.13.2.725.g09c95d1e9-goog

* [PATCH 24/37] binder: refactor binder ref inc/dec for thread safety
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (22 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 23/37] binder: make sure accesses to proc/thread are safe Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:01 ` [PATCH 25/37] binder: use node->tmp_refs to ensure node safety Todd Kjos
                   ` (13 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Once locks are added, binder_refs can only be accessed
safely with the proc lock held. Refactor the inc/dec paths
to make them atomic with the binder_get_ref* paths and with
node inc/dec. For example, instead of:

  ref = binder_get_ref(proc, handle, strong);
  ...
  binder_dec_ref(ref, strong);

we now have:

  ret = binder_dec_ref_for_handle(proc, handle, strong, &rdata);

Since the actual ref is no longer exposed to callers, a
new struct binder_ref_data is introduced, which can be used
to return a copy of the ref state.
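
For illustration, a minimal sketch of the caller-side pattern after
this refactor (assembled from the call sites in this patch rather
than quoted verbatim; error handling abbreviated):

  struct binder_ref_data rdata;
  int ret;

  /* look up (or create) the ref and increment it in one call */
  ret = binder_inc_ref_for_node(target_proc, node, strong,
                                &thread->todo, &rdata);
  if (ret)
          return ret;
  fp->handle = rdata.desc;  /* consume the snapshot, not the ref */
  ...
  /* decrement by handle; the binder_ref itself stays private */
  ret = binder_dec_ref_for_handle(proc, fp->handle, strong, &rdata);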

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c       | 484 ++++++++++++++++++++++++++++++-----------
 drivers/android/binder_trace.h |  32 +--
 2 files changed, 379 insertions(+), 137 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index ca7d866b89e8..4d0b99862339 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -291,20 +291,51 @@ struct binder_ref_death {
 	binder_uintptr_t cookie;
 };
 
+/**
+ * struct binder_ref_data - binder_ref counts and id
+ * @debug_id:        unique ID for the ref
+ * @desc:            unique userspace handle for ref
+ * @strong:          strong ref count (debugging only if not locked)
+ * @weak:            weak ref count (debugging only if not locked)
+ *
+ * Structure to hold ref count and ref id information. Since
+ * the actual ref can only be accessed with a lock, this structure
+ * is used to return information about the ref to callers of
+ * ref inc/dec functions.
+ */
+struct binder_ref_data {
+	int debug_id;
+	uint32_t desc;
+	int strong;
+	int weak;
+};
+
+/**
+ * struct binder_ref - struct to track references on nodes
+ * @data:        binder_ref_data containing id, handle, and current refcounts
+ * @rb_node_desc: node for lookup by @data.desc in proc's rb_tree
+ * @rb_node_node: node for lookup by @node in proc's rb_tree
+ * @node_entry:  list entry for node->refs list in target node
+ * @proc:        binder_proc containing ref
+ * @node:        binder_node of target node. When cleaning up a
+ *               ref for deletion in binder_cleanup_ref, a non-NULL
+ *               @node indicates the node must be freed
+ * @death:       pointer to death notification (ref_death) if requested
+ *
+ * Structure to track references from procA to target node (on procB). This
+ * structure is unsafe to access without holding @proc->outer_lock.
+ */
 struct binder_ref {
 	/* Lookups needed: */
 	/*   node + proc => ref (transaction) */
 	/*   desc + proc => ref (transaction, inc/dec ref) */
 	/*   node => refs + procs (proc exit) */
-	int debug_id;
+	struct binder_ref_data data;
 	struct rb_node rb_node_desc;
 	struct rb_node rb_node_node;
 	struct hlist_node node_entry;
 	struct binder_proc *proc;
 	struct binder_node *node;
-	uint32_t desc;
-	int strong;
-	int weak;
 	struct binder_ref_death *death;
 };
 
@@ -627,11 +658,11 @@ static struct binder_ref *binder_get_ref(struct binder_proc *proc,
 	while (n) {
 		ref = rb_entry(n, struct binder_ref, rb_node_desc);
 
-		if (desc < ref->desc) {
+		if (desc < ref->data.desc) {
 			n = n->rb_left;
-		} else if (desc > ref->desc) {
+		} else if (desc > ref->data.desc) {
 			n = n->rb_right;
-		} else if (need_strong_ref && !ref->strong) {
+		} else if (need_strong_ref && !ref->data.strong) {
 			binder_user_error("tried to use weak ref as strong ref\n");
 			return NULL;
 		} else {
@@ -641,14 +672,33 @@ static struct binder_ref *binder_get_ref(struct binder_proc *proc,
 	return NULL;
 }
 
+/**
+ * binder_get_ref_for_node() - get the ref associated with given node
+ * @proc:	binder_proc that owns the ref
+ * @node:	binder_node of target
+ * @new_ref:	newly allocated binder_ref to be initialized or %NULL
+ *
+ * Look up the ref for the given node and return it if it exists
+ *
+ * If it doesn't exist and the caller provides a newly allocated
+ * ref, initialize the fields of the newly allocated ref and insert
+ * into the given proc rb_trees and node refs list.
+ *
+ * Return:	the ref for node. It is possible that another thread
+ *		allocated/initialized the ref first in which case the
+ *		returned ref would be different than the passed-in
+ *		new_ref. new_ref must be kfree'd by the caller in
+ *		this case.
+ */
 static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
-						  struct binder_node *node)
+						  struct binder_node *node,
+						  struct binder_ref *new_ref)
 {
-	struct rb_node *n;
+	struct binder_context *context = proc->context;
 	struct rb_node **p = &proc->refs_by_node.rb_node;
 	struct rb_node *parent = NULL;
-	struct binder_ref *ref, *new_ref;
-	struct binder_context *context = proc->context;
+	struct binder_ref *ref;
+	struct rb_node *n;
 
 	while (*p) {
 		parent = *p;
@@ -661,22 +711,22 @@ static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
 		else
 			return ref;
 	}
-	new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
-	if (new_ref == NULL)
+	if (!new_ref)
 		return NULL;
+
 	binder_stats_created(BINDER_STAT_REF);
-	new_ref->debug_id = atomic_inc_return(&binder_last_id);
+	new_ref->data.debug_id = atomic_inc_return(&binder_last_id);
 	new_ref->proc = proc;
 	new_ref->node = node;
 	rb_link_node(&new_ref->rb_node_node, parent, p);
 	rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);
 
-	new_ref->desc = (node == context->binder_context_mgr_node) ? 0 : 1;
+	new_ref->data.desc = (node == context->binder_context_mgr_node) ? 0 : 1;
 	for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
 		ref = rb_entry(n, struct binder_ref, rb_node_desc);
-		if (ref->desc > new_ref->desc)
+		if (ref->data.desc > new_ref->data.desc)
 			break;
-		new_ref->desc = ref->desc + 1;
+		new_ref->data.desc = ref->data.desc + 1;
 	}
 
 	p = &proc->refs_by_desc.rb_node;
@@ -684,9 +734,9 @@ static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
 		parent = *p;
 		ref = rb_entry(parent, struct binder_ref, rb_node_desc);
 
-		if (new_ref->desc < ref->desc)
+		if (new_ref->data.desc < ref->data.desc)
 			p = &(*p)->rb_left;
-		else if (new_ref->desc > ref->desc)
+		else if (new_ref->data.desc > ref->data.desc)
 			p = &(*p)->rb_right;
 		else
 			BUG();
@@ -697,89 +747,267 @@ static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
 
 	binder_debug(BINDER_DEBUG_INTERNAL_REFS,
 		     "%d new ref %d desc %d for node %d\n",
-		      proc->pid, new_ref->debug_id, new_ref->desc,
+		      proc->pid, new_ref->data.debug_id, new_ref->data.desc,
 		      node->debug_id);
 	return new_ref;
 }
 
-static void binder_delete_ref(struct binder_ref *ref)
+static void binder_cleanup_ref(struct binder_ref *ref)
 {
 	binder_debug(BINDER_DEBUG_INTERNAL_REFS,
 		     "%d delete ref %d desc %d for node %d\n",
-		      ref->proc->pid, ref->debug_id, ref->desc,
+		      ref->proc->pid, ref->data.debug_id, ref->data.desc,
 		      ref->node->debug_id);
 
 	rb_erase(&ref->rb_node_desc, &ref->proc->refs_by_desc);
 	rb_erase(&ref->rb_node_node, &ref->proc->refs_by_node);
-	if (ref->strong)
+
+	if (ref->data.strong)
 		binder_dec_node(ref->node, 1, 1);
+
 	hlist_del(&ref->node_entry);
 	binder_dec_node(ref->node, 0, 1);
+
 	if (ref->death) {
 		binder_debug(BINDER_DEBUG_DEAD_BINDER,
 			     "%d delete ref %d desc %d has death notification\n",
-			      ref->proc->pid, ref->debug_id, ref->desc);
+			      ref->proc->pid, ref->data.debug_id,
+			      ref->data.desc);
 		list_del(&ref->death->work.entry);
-		kfree(ref->death);
 		binder_stats_deleted(BINDER_STAT_DEATH);
 	}
-	kfree(ref);
 	binder_stats_deleted(BINDER_STAT_REF);
 }
 
+/**
+ * binder_inc_ref() - increment the ref for given handle
+ * @ref:         ref to be incremented
+ * @strong:      if true, strong increment, else weak
+ * @target_list: list to queue node work on
+ *
+ * Increment the ref.
+ *
+ * Return: 0, if successful, else errno
+ */
 static int binder_inc_ref(struct binder_ref *ref, int strong,
 			  struct list_head *target_list)
 {
 	int ret;
 
 	if (strong) {
-		if (ref->strong == 0) {
+		if (ref->data.strong == 0) {
 			ret = binder_inc_node(ref->node, 1, 1, target_list);
 			if (ret)
 				return ret;
 		}
-		ref->strong++;
+		ref->data.strong++;
 	} else {
-		if (ref->weak == 0) {
+		if (ref->data.weak == 0) {
 			ret = binder_inc_node(ref->node, 0, 1, target_list);
 			if (ret)
 				return ret;
 		}
-		ref->weak++;
+		ref->data.weak++;
 	}
 	return 0;
 }
 
-
-static int binder_dec_ref(struct binder_ref *ref, int strong)
+/**
+ * binder_dec_ref() - dec the ref for given handle
+ * @ref:	ref to be decremented
+ * @strong:	if true, strong decrement, else weak
+ *
+ * Decrement the ref.
+ *
+ * TODO: kfree is avoided here since an upcoming patch
+ * will put this under a lock.
+ *
+ * Return: true if ref is cleaned up and ready to be freed
+ */
+static bool binder_dec_ref(struct binder_ref *ref, int strong)
 {
 	if (strong) {
-		if (ref->strong == 0) {
+		if (ref->data.strong == 0) {
 			binder_user_error("%d invalid dec strong, ref %d desc %d s %d w %d\n",
-					  ref->proc->pid, ref->debug_id,
-					  ref->desc, ref->strong, ref->weak);
-			return -EINVAL;
+					  ref->proc->pid, ref->data.debug_id,
+					  ref->data.desc, ref->data.strong,
+					  ref->data.weak);
+			return false;
 		}
-		ref->strong--;
-		if (ref->strong == 0) {
+		ref->data.strong--;
+		if (ref->data.strong == 0) {
 			int ret;
 
 			ret = binder_dec_node(ref->node, strong, 1);
 			if (ret)
-				return ret;
+				return false;
 		}
 	} else {
-		if (ref->weak == 0) {
+		if (ref->data.weak == 0) {
 			binder_user_error("%d invalid dec weak, ref %d desc %d s %d w %d\n",
-					  ref->proc->pid, ref->debug_id,
-					  ref->desc, ref->strong, ref->weak);
-			return -EINVAL;
+					  ref->proc->pid, ref->data.debug_id,
+					  ref->data.desc, ref->data.strong,
+					  ref->data.weak);
+			return false;
 		}
-		ref->weak--;
+		ref->data.weak--;
 	}
-	if (ref->strong == 0 && ref->weak == 0)
-		binder_delete_ref(ref);
-	return 0;
+	if (ref->data.strong == 0 && ref->data.weak == 0) {
+		binder_cleanup_ref(ref);
+		/*
+		 * TODO: we could kfree(ref) here, but an upcoming
+		 * patch will call this with a lock held, so we
+		 * return an indication that the ref should be
+		 * freed.
+		 */
+		return true;
+	}
+	return false;
+}
+
+/**
+ * binder_get_node_from_ref() - get the node from the given proc/desc
+ * @proc:	proc containing the ref
+ * @desc:	the handle associated with the ref
+ * @need_strong_ref: if true, only return node if ref is strong
+ * @rdata:	the id/refcount data for the ref
+ *
+ * Given a proc and ref handle, return the associated binder_node
+ *
+ * Return: a binder_node or NULL if not found or not strong when strong required
+ */
+static struct binder_node *binder_get_node_from_ref(
+		struct binder_proc *proc,
+		u32 desc, bool need_strong_ref,
+		struct binder_ref_data *rdata)
+{
+	struct binder_node *node;
+	struct binder_ref *ref;
+
+	ref = binder_get_ref(proc, desc, need_strong_ref);
+	if (!ref)
+		goto err_no_ref;
+	node = ref->node;
+	if (rdata)
+		*rdata = ref->data;
+
+	return node;
+
+err_no_ref:
+	return NULL;
+}
+
+/**
+ * binder_free_ref() - free the binder_ref
+ * @ref:	ref to free
+ *
+ * Free the binder_ref and the binder_ref_death indicated by ref->death.
+ */
+static void binder_free_ref(struct binder_ref *ref)
+{
+	kfree(ref->death);
+	kfree(ref);
+}
+
+/**
+ * binder_update_ref_for_handle() - inc/dec the ref for given handle
+ * @proc:	proc containing the ref
+ * @desc:	the handle associated with the ref
+ * @increment:	true=inc reference, false=dec reference
+ * @strong:	true=strong reference, false=weak reference
+ * @rdata:	the id/refcount data for the ref
+ *
+ * Given a proc and ref handle, increment or decrement the ref
+ * according to "increment" arg.
+ *
+ * Return: 0 if successful, else errno
+ */
+static int binder_update_ref_for_handle(struct binder_proc *proc,
+		uint32_t desc, bool increment, bool strong,
+		struct binder_ref_data *rdata)
+{
+	int ret = 0;
+	struct binder_ref *ref;
+	bool delete_ref = false;
+
+	ref = binder_get_ref(proc, desc, strong);
+	if (!ref) {
+		ret = -EINVAL;
+		goto err_no_ref;
+	}
+	if (increment)
+		ret = binder_inc_ref(ref, strong, NULL);
+	else
+		delete_ref = binder_dec_ref(ref, strong);
+
+	if (rdata)
+		*rdata = ref->data;
+
+	if (delete_ref)
+		binder_free_ref(ref);
+	return ret;
+
+err_no_ref:
+	return ret;
+}
+
+/**
+ * binder_dec_ref_for_handle() - dec the ref for given handle
+ * @proc:	proc containing the ref
+ * @desc:	the handle associated with the ref
+ * @strong:	true=strong reference, false=weak reference
+ * @rdata:	the id/refcount data for the ref
+ *
+ * Just calls binder_update_ref_for_handle() to decrement the ref.
+ *
+ * Return: 0 if successful, else errno
+ */
+static int binder_dec_ref_for_handle(struct binder_proc *proc,
+		uint32_t desc, bool strong, struct binder_ref_data *rdata)
+{
+	return binder_update_ref_for_handle(proc, desc, false, strong, rdata);
+}
+
+
+/**
+ * binder_inc_ref_for_node() - increment the ref for given proc/node
+ * @proc:	 proc containing the ref
+ * @node:	 target node
+ * @strong:	 true=strong reference, false=weak reference
+ * @target_list: worklist to use if node is incremented
+ * @rdata:	 the id/refcount data for the ref
+ *
+ * Given a proc and node, increment the ref. Create the ref if it
+ * doesn't already exist
+ *
+ * Return: 0 if successful, else errno
+ */
+static int binder_inc_ref_for_node(struct binder_proc *proc,
+			struct binder_node *node,
+			bool strong,
+			struct list_head *target_list,
+			struct binder_ref_data *rdata)
+{
+	struct binder_ref *ref;
+	struct binder_ref *new_ref = NULL;
+	int ret = 0;
+
+	ref = binder_get_ref_for_node(proc, node, NULL);
+	if (!ref) {
+		new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
+		if (!new_ref)
+			return -ENOMEM;
+		ref = binder_get_ref_for_node(proc, node, new_ref);
+	}
+	ret = binder_inc_ref(ref, strong, target_list);
+	*rdata = ref->data;
+	if (new_ref && ref != new_ref)
+		/*
+		 * Another thread created the ref first so
+		 * free the one we allocated
+		 */
+		kfree(new_ref);
+	return ret;
 }
 
 static void binder_pop_transaction(struct binder_thread *target_thread,
@@ -1124,20 +1352,21 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
 		case BINDER_TYPE_HANDLE:
 		case BINDER_TYPE_WEAK_HANDLE: {
 			struct flat_binder_object *fp;
-			struct binder_ref *ref;
+			struct binder_ref_data rdata;
+			int ret;
 
 			fp = to_flat_binder_object(hdr);
-			ref = binder_get_ref(proc, fp->handle,
-					     hdr->type == BINDER_TYPE_HANDLE);
-			if (ref == NULL) {
-				pr_err("transaction release %d bad handle %d\n",
-				 debug_id, fp->handle);
+			ret = binder_dec_ref_for_handle(proc, fp->handle,
+				hdr->type == BINDER_TYPE_HANDLE, &rdata);
+
+			if (ret) {
+				pr_err("transaction release %d bad handle %d, ret = %d\n",
+				 debug_id, fp->handle, ret);
 				break;
 			}
 			binder_debug(BINDER_DEBUG_TRANSACTION,
-				     "        ref %d desc %d (node %d)\n",
-				     ref->debug_id, ref->desc, ref->node->debug_id);
-			binder_dec_ref(ref, hdr->type == BINDER_TYPE_HANDLE);
+				     "        ref %d desc %d\n",
+				     rdata.debug_id, rdata.desc);
 		} break;
 
 		case BINDER_TYPE_FD: {
@@ -1209,9 +1438,10 @@ static int binder_translate_binder(struct flat_binder_object *fp,
 				   struct binder_thread *thread)
 {
 	struct binder_node *node;
-	struct binder_ref *ref;
 	struct binder_proc *proc = thread->proc;
 	struct binder_proc *target_proc = t->to_proc;
+	struct binder_ref_data rdata;
+	int ret;
 
 	node = binder_get_node(proc, fp->binder);
 	if (!node) {
@@ -1232,25 +1462,25 @@ static int binder_translate_binder(struct flat_binder_object *fp,
 	if (security_binder_transfer_binder(proc->tsk, target_proc->tsk))
 		return -EPERM;
 
-	ref = binder_get_ref_for_node(target_proc, node);
-	if (!ref)
-		return -ENOMEM;
+	ret = binder_inc_ref_for_node(target_proc, node,
+			fp->hdr.type == BINDER_TYPE_BINDER,
+			&thread->todo, &rdata);
+	if (ret)
+		return ret;
 
 	if (fp->hdr.type == BINDER_TYPE_BINDER)
 		fp->hdr.type = BINDER_TYPE_HANDLE;
 	else
 		fp->hdr.type = BINDER_TYPE_WEAK_HANDLE;
 	fp->binder = 0;
-	fp->handle = ref->desc;
+	fp->handle = rdata.desc;
 	fp->cookie = 0;
-	binder_inc_ref(ref, fp->hdr.type == BINDER_TYPE_HANDLE, &thread->todo);
 
-	trace_binder_transaction_node_to_ref(t, node, ref);
+	trace_binder_transaction_node_to_ref(t, node, &rdata);
 	binder_debug(BINDER_DEBUG_TRANSACTION,
 		     "        node %d u%016llx -> ref %d desc %d\n",
 		     node->debug_id, (u64)node->ptr,
-		     ref->debug_id, ref->desc);
-
+		     rdata.debug_id, rdata.desc);
 	return 0;
 }
 
@@ -1258,13 +1488,14 @@ static int binder_translate_handle(struct flat_binder_object *fp,
 				   struct binder_transaction *t,
 				   struct binder_thread *thread)
 {
-	struct binder_ref *ref;
 	struct binder_proc *proc = thread->proc;
 	struct binder_proc *target_proc = t->to_proc;
+	struct binder_node *node;
+	struct binder_ref_data src_rdata;
 
-	ref = binder_get_ref(proc, fp->handle,
-			     fp->hdr.type == BINDER_TYPE_HANDLE);
-	if (!ref) {
+	node = binder_get_node_from_ref(proc, fp->handle,
+			fp->hdr.type == BINDER_TYPE_HANDLE, &src_rdata);
+	if (!node) {
 		binder_user_error("%d:%d got transaction with invalid handle, %d\n",
 				  proc->pid, thread->pid, fp->handle);
 		return -EINVAL;
@@ -1272,37 +1503,41 @@ static int binder_translate_handle(struct flat_binder_object *fp,
 	if (security_binder_transfer_binder(proc->tsk, target_proc->tsk))
 		return -EPERM;
 
-	if (ref->node->proc == target_proc) {
+	if (node->proc == target_proc) {
 		if (fp->hdr.type == BINDER_TYPE_HANDLE)
 			fp->hdr.type = BINDER_TYPE_BINDER;
 		else
 			fp->hdr.type = BINDER_TYPE_WEAK_BINDER;
-		fp->binder = ref->node->ptr;
-		fp->cookie = ref->node->cookie;
-		binder_inc_node(ref->node, fp->hdr.type == BINDER_TYPE_BINDER,
+		fp->binder = node->ptr;
+		fp->cookie = node->cookie;
+		binder_inc_node(node,
+				fp->hdr.type == BINDER_TYPE_BINDER,
 				0, NULL);
-		trace_binder_transaction_ref_to_node(t, ref);
+		trace_binder_transaction_ref_to_node(t, node, &src_rdata);
 		binder_debug(BINDER_DEBUG_TRANSACTION,
 			     "        ref %d desc %d -> node %d u%016llx\n",
-			     ref->debug_id, ref->desc, ref->node->debug_id,
-			     (u64)ref->node->ptr);
+			     src_rdata.debug_id, src_rdata.desc, node->debug_id,
+			     (u64)node->ptr);
 	} else {
-		struct binder_ref *new_ref;
+		int ret;
+		struct binder_ref_data dest_rdata;
 
-		new_ref = binder_get_ref_for_node(target_proc, ref->node);
-		if (!new_ref)
-			return -ENOMEM;
+		ret = binder_inc_ref_for_node(target_proc, node,
+				fp->hdr.type == BINDER_TYPE_HANDLE,
+				NULL, &dest_rdata);
+		if (ret)
+			return ret;
 
 		fp->binder = 0;
-		fp->handle = new_ref->desc;
+		fp->handle = dest_rdata.desc;
 		fp->cookie = 0;
-		binder_inc_ref(new_ref, fp->hdr.type == BINDER_TYPE_HANDLE,
-			       NULL);
-		trace_binder_transaction_ref_to_ref(t, ref, new_ref);
+		trace_binder_transaction_ref_to_ref(t, node, &src_rdata,
+						    &dest_rdata);
 		binder_debug(BINDER_DEBUG_TRANSACTION,
 			     "        ref %d desc %d -> ref %d desc %d (node %d)\n",
-			     ref->debug_id, ref->desc, new_ref->debug_id,
-			     new_ref->desc, ref->node->debug_id);
+			     src_rdata.debug_id, src_rdata.desc,
+			     dest_rdata.debug_id, dest_rdata.desc,
+			     node->debug_id);
 	}
 	return 0;
 }
@@ -2043,6 +2278,8 @@ static int binder_thread_write(struct binder_proc *proc,
 	void __user *end = buffer + size;
 
 	while (ptr < end && thread->return_error.cmd == BR_OK) {
+		int ret;
+
 		if (get_user(cmd, (uint32_t __user *)ptr))
 			return -EFAULT;
 		ptr += sizeof(uint32_t);
@@ -2058,62 +2295,61 @@ static int binder_thread_write(struct binder_proc *proc,
 		case BC_RELEASE:
 		case BC_DECREFS: {
 			uint32_t target;
-			struct binder_ref *ref = NULL;
 			const char *debug_string;
+			bool strong = cmd == BC_ACQUIRE || cmd == BC_RELEASE;
+			bool increment = cmd == BC_INCREFS || cmd == BC_ACQUIRE;
+			struct binder_ref_data rdata;
 
 			if (get_user(target, (uint32_t __user *)ptr))
 				return -EFAULT;
 
 			ptr += sizeof(uint32_t);
-			if (target == 0 &&
-			    (cmd == BC_INCREFS || cmd == BC_ACQUIRE)) {
+			ret = -1;
+			if (increment && !target) {
 				struct binder_node *ctx_mgr_node;
-
 				mutex_lock(&context->context_mgr_node_lock);
 				ctx_mgr_node = context->binder_context_mgr_node;
-				if (ctx_mgr_node) {
-					ref = binder_get_ref_for_node(proc,
-							ctx_mgr_node);
-					if (ref && ref->desc != target) {
-						binder_user_error("%d:%d tried to acquire reference to desc 0, got %d instead\n",
-							proc->pid, thread->pid,
-							ref->desc);
-					}
-				}
+				if (ctx_mgr_node)
+					ret = binder_inc_ref_for_node(
+							proc, ctx_mgr_node,
+							strong, NULL, &rdata);
 				mutex_unlock(&context->context_mgr_node_lock);
 			}
-			if (ref == NULL)
-				ref = binder_get_ref(proc, target,
-						     cmd == BC_ACQUIRE ||
-						     cmd == BC_RELEASE);
-			if (ref == NULL) {
-				binder_user_error("%d:%d refcount change on invalid ref %d\n",
-					proc->pid, thread->pid, target);
-				break;
+			if (ret)
+				ret = binder_update_ref_for_handle(
+						proc, target, increment, strong,
+						&rdata);
+			if (!ret && rdata.desc != target) {
+				binder_user_error("%d:%d tried to acquire reference to desc %d, got %d instead\n",
+					proc->pid, thread->pid,
+					target, rdata.desc);
 			}
 			switch (cmd) {
 			case BC_INCREFS:
 				debug_string = "IncRefs";
-				binder_inc_ref(ref, 0, NULL);
 				break;
 			case BC_ACQUIRE:
 				debug_string = "Acquire";
-				binder_inc_ref(ref, 1, NULL);
 				break;
 			case BC_RELEASE:
 				debug_string = "Release";
-				binder_dec_ref(ref, 1);
 				break;
 			case BC_DECREFS:
 			default:
 				debug_string = "DecRefs";
-				binder_dec_ref(ref, 0);
+				break;
+			}
+			if (ret) {
+				binder_user_error("%d:%d %s %d refcount change on invalid ref %d ret %d\n",
+					proc->pid, thread->pid, debug_string,
+					strong, target, ret);
 				break;
 			}
 			binder_debug(BINDER_DEBUG_USER_REFS,
-				     "%d:%d %s ref %d desc %d s %d w %d for node %d\n",
-				     proc->pid, thread->pid, debug_string, ref->debug_id,
-				     ref->desc, ref->strong, ref->weak, ref->node->debug_id);
+				     "%d:%d %s ref %d desc %d s %d w %d\n",
+				     proc->pid, thread->pid, debug_string,
+				     rdata.debug_id, rdata.desc, rdata.strong,
+				     rdata.weak);
 			break;
 		}
 		case BC_INCREFS_DONE:
@@ -2311,8 +2547,9 @@ static int binder_thread_write(struct binder_proc *proc,
 				     cmd == BC_REQUEST_DEATH_NOTIFICATION ?
 				     "BC_REQUEST_DEATH_NOTIFICATION" :
 				     "BC_CLEAR_DEATH_NOTIFICATION",
-				     (u64)cookie, ref->debug_id, ref->desc,
-				     ref->strong, ref->weak, ref->node->debug_id);
+				     (u64)cookie, ref->data.debug_id,
+				     ref->data.desc, ref->data.strong,
+				     ref->data.weak, ref->node->debug_id);
 
 			if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
 				if (ref->death) {
@@ -3473,7 +3710,8 @@ static void binder_deferred_release(struct binder_proc *proc)
 
 		ref = rb_entry(n, struct binder_ref, rb_node_desc);
 		outgoing_refs++;
-		binder_delete_ref(ref);
+		binder_cleanup_ref(ref);
+		binder_free_ref(ref);
 	}
 
 	binder_release_work(&proc->todo);
@@ -3675,9 +3913,11 @@ static void print_binder_node(struct seq_file *m, struct binder_node *node)
 
 static void print_binder_ref(struct seq_file *m, struct binder_ref *ref)
 {
-	seq_printf(m, "  ref %d: desc %d %snode %d s %d w %d d %p\n",
-		   ref->debug_id, ref->desc, ref->node->proc ? "" : "dead ",
-		   ref->node->debug_id, ref->strong, ref->weak, ref->death);
+	seq_printf(m, "  ref %d: desc %d %snode %d s %d w %d d %pK\n",
+		   ref->data.debug_id, ref->data.desc,
+		   ref->node->proc ? "" : "dead ",
+		   ref->node->debug_id, ref->data.strong,
+		   ref->data.weak, ref->death);
 }
 
 static void print_binder_proc(struct seq_file *m,
@@ -3844,8 +4084,8 @@ static void print_binder_proc_stats(struct seq_file *m,
 		struct binder_ref *ref = rb_entry(n, struct binder_ref,
 						  rb_node_desc);
 		count++;
-		strong += ref->strong;
-		weak += ref->weak;
+		strong += ref->data.strong;
+		weak += ref->data.weak;
 	}
 	seq_printf(m, "  refs: %d s %d w %d\n", count, strong, weak);
 
diff --git a/drivers/android/binder_trace.h b/drivers/android/binder_trace.h
index 50b0d21f42cf..7967db16ba5a 100644
--- a/drivers/android/binder_trace.h
+++ b/drivers/android/binder_trace.h
@@ -24,7 +24,7 @@ struct binder_buffer;
 struct binder_node;
 struct binder_proc;
 struct binder_alloc;
-struct binder_ref;
+struct binder_ref_data;
 struct binder_thread;
 struct binder_transaction;
 
@@ -147,8 +147,8 @@ TRACE_EVENT(binder_transaction_received,
 
 TRACE_EVENT(binder_transaction_node_to_ref,
 	TP_PROTO(struct binder_transaction *t, struct binder_node *node,
-		 struct binder_ref *ref),
-	TP_ARGS(t, node, ref),
+		 struct binder_ref_data *rdata),
+	TP_ARGS(t, node, rdata),
 
 	TP_STRUCT__entry(
 		__field(int, debug_id)
@@ -161,8 +161,8 @@ TRACE_EVENT(binder_transaction_node_to_ref,
 		__entry->debug_id = t->debug_id;
 		__entry->node_debug_id = node->debug_id;
 		__entry->node_ptr = node->ptr;
-		__entry->ref_debug_id = ref->debug_id;
-		__entry->ref_desc = ref->desc;
+		__entry->ref_debug_id = rdata->debug_id;
+		__entry->ref_desc = rdata->desc;
 	),
 	TP_printk("transaction=%d node=%d src_ptr=0x%016llx ==> dest_ref=%d dest_desc=%d",
 		  __entry->debug_id, __entry->node_debug_id,
@@ -171,8 +171,9 @@ TRACE_EVENT(binder_transaction_node_to_ref,
 );
 
 TRACE_EVENT(binder_transaction_ref_to_node,
-	TP_PROTO(struct binder_transaction *t, struct binder_ref *ref),
-	TP_ARGS(t, ref),
+	TP_PROTO(struct binder_transaction *t, struct binder_node *node,
+		 struct binder_ref_data *rdata),
+	TP_ARGS(t, node, rdata),
 
 	TP_STRUCT__entry(
 		__field(int, debug_id)
@@ -183,10 +184,10 @@ TRACE_EVENT(binder_transaction_ref_to_node,
 	),
 	TP_fast_assign(
 		__entry->debug_id = t->debug_id;
-		__entry->ref_debug_id = ref->debug_id;
-		__entry->ref_desc = ref->desc;
-		__entry->node_debug_id = ref->node->debug_id;
-		__entry->node_ptr = ref->node->ptr;
+		__entry->ref_debug_id = rdata->debug_id;
+		__entry->ref_desc = rdata->desc;
+		__entry->node_debug_id = node->debug_id;
+		__entry->node_ptr = node->ptr;
 	),
 	TP_printk("transaction=%d node=%d src_ref=%d src_desc=%d ==> dest_ptr=0x%016llx",
 		  __entry->debug_id, __entry->node_debug_id,
@@ -195,9 +196,10 @@ TRACE_EVENT(binder_transaction_ref_to_node,
 );
 
 TRACE_EVENT(binder_transaction_ref_to_ref,
-	TP_PROTO(struct binder_transaction *t, struct binder_ref *src_ref,
-		 struct binder_ref *dest_ref),
-	TP_ARGS(t, src_ref, dest_ref),
+	TP_PROTO(struct binder_transaction *t, struct binder_node *node,
+		 struct binder_ref_data *src_ref,
+		 struct binder_ref_data *dest_ref),
+	TP_ARGS(t, node, src_ref, dest_ref),
 
 	TP_STRUCT__entry(
 		__field(int, debug_id)
@@ -209,7 +211,7 @@ TRACE_EVENT(binder_transaction_ref_to_ref,
 	),
 	TP_fast_assign(
 		__entry->debug_id = t->debug_id;
-		__entry->node_debug_id = src_ref->node->debug_id;
+		__entry->node_debug_id = node->debug_id;
 		__entry->src_ref_debug_id = src_ref->debug_id;
 		__entry->src_ref_desc = src_ref->desc;
 		__entry->dest_ref_debug_id = dest_ref->debug_id;
-- 
2.13.2.725.g09c95d1e9-goog

* [PATCH 25/37] binder: use node->tmp_refs to ensure node safety
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (23 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 24/37] binder: refactor binder ref inc/dec for thread safety Todd Kjos
@ 2017-06-29 19:01 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 26/37] binder: introduce locking helper functions Todd Kjos
                   ` (12 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:01 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

When obtaining a node via binder_get_node(),
binder_get_node_from_ref() or binder_new_node(), increment
node->tmp_refs to take a temporary reference on the node to ensure
the node persists while being used. binder_put_node() must be
called to remove the temporary reference.
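
A minimal sketch of the resulting get/put pairing (mirroring
binder_translate_binder() in this patch; error paths abbreviated):

  struct binder_node *node;

  node = binder_get_node(proc, fp->binder);  /* takes a tmp_ref */
  if (!node)
          return -EINVAL;
  /* node cannot be freed while the tmp_ref is held */
  ...
  binder_put_node(node);  /* drops the tmp_ref taken above */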

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 124 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 104 insertions(+), 20 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 4d0b99862339..ec050c6d1192 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -274,6 +274,7 @@ struct binder_node {
 	int internal_strong_refs;
 	int local_weak_refs;
 	int local_strong_refs;
+	int tmp_refs;
 	binder_uintptr_t ptr;
 	binder_uintptr_t cookie;
 	unsigned has_strong_ref:1;
@@ -427,6 +428,7 @@ static void
 binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
 static void binder_free_thread(struct binder_thread *thread);
 static void binder_free_proc(struct binder_proc *proc);
+static void binder_inc_node_tmpref(struct binder_node *node);
 
 static int task_get_unused_fd_flags(struct binder_proc *proc, int flags)
 {
@@ -521,8 +523,15 @@ static struct binder_node *binder_get_node(struct binder_proc *proc,
 			n = n->rb_left;
 		else if (ptr > node->ptr)
 			n = n->rb_right;
-		else
+		else {
+			/*
+			 * take an implicit weak reference
+			 * to ensure node stays alive until
+			 * call to binder_put_node()
+			 */
+			binder_inc_node_tmpref(node);
 			return node;
+		}
 	}
 	return NULL;
 }
@@ -551,6 +560,7 @@ static struct binder_node *binder_new_node(struct binder_proc *proc,
 	if (node == NULL)
 		return NULL;
 	binder_stats_created(BINDER_STAT_NODE);
+	node->tmp_refs++;
 	rb_link_node(&node->rb_node, parent, p);
 	rb_insert_color(&node->rb_node, &proc->nodes);
 	node->debug_id = atomic_inc_return(&binder_last_id);
@@ -615,7 +625,8 @@ static int binder_dec_node(struct binder_node *node, int strong, int internal)
 	} else {
 		if (!internal)
 			node->local_weak_refs--;
-		if (node->local_weak_refs || !hlist_empty(&node->refs))
+		if (node->local_weak_refs || node->tmp_refs ||
+				!hlist_empty(&node->refs))
 			return 0;
 	}
 	if (node->proc && (node->has_strong_ref || node->has_weak_ref)) {
@@ -625,7 +636,7 @@ static int binder_dec_node(struct binder_node *node, int strong, int internal)
 		}
 	} else {
 		if (hlist_empty(&node->refs) && !node->local_strong_refs &&
-		    !node->local_weak_refs) {
+		    !node->local_weak_refs && !node->tmp_refs) {
 			list_del_init(&node->work.entry);
 			if (node->proc) {
 				rb_erase(&node->rb_node, &node->proc->nodes);
@@ -648,6 +659,46 @@ static int binder_dec_node(struct binder_node *node, int strong, int internal)
 	return 0;
 }
 
+/**
+ * binder_inc_node_tmpref() - take a temporary reference on node
+ * @node:	node to reference
+ *
+ * Take reference on node to prevent the node from being freed
+ * while referenced only by a local variable
+ */
+static void binder_inc_node_tmpref(struct binder_node *node)
+{
+	/*
+	 * No call to binder_inc_node() is needed since we
+	 * don't need to inform userspace of any changes to
+	 * tmp_refs
+	 */
+	node->tmp_refs++;
+}
+
+/**
+ * binder_dec_node_tmpref() - remove a temporary reference on node
+ * @node:	node to reference
+ *
+ * Release temporary reference on node taken via binder_inc_node_tmpref()
+ */
+static void binder_dec_node_tmpref(struct binder_node *node)
+{
+	node->tmp_refs--;
+	BUG_ON(node->tmp_refs < 0);
+	/*
+	 * Call binder_dec_node() to check if all refcounts are 0
+	 * and cleanup is needed. Calling with strong=0 and internal=1
+	 * causes no actual reference to be released in binder_dec_node().
+	 * If that changes, a change is needed here too.
+	 */
+	binder_dec_node(node, 0, 1);
+}
+
+static void binder_put_node(struct binder_node *node)
+{
+	binder_dec_node_tmpref(node);
+}
 
 static struct binder_ref *binder_get_ref(struct binder_proc *proc,
 					 u32 desc, bool need_strong_ref)
@@ -888,6 +939,11 @@ static struct binder_node *binder_get_node_from_ref(
 	if (!ref)
 		goto err_no_ref;
 	node = ref->node;
+	/*
+	 * Take an implicit reference on the node to ensure
+	 * it stays alive until the call to binder_put_node()
+	 */
+	binder_inc_node_tmpref(node);
 	if (rdata)
 		*rdata = ref->data;
 
@@ -1348,6 +1404,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
 				     node->debug_id, (u64)node->ptr);
 			binder_dec_node(node, hdr->type == BINDER_TYPE_BINDER,
 					0);
+			binder_put_node(node);
 		} break;
 		case BINDER_TYPE_HANDLE:
 		case BINDER_TYPE_WEAK_HANDLE: {
@@ -1441,7 +1498,7 @@ static int binder_translate_binder(struct flat_binder_object *fp,
 	struct binder_proc *proc = thread->proc;
 	struct binder_proc *target_proc = t->to_proc;
 	struct binder_ref_data rdata;
-	int ret;
+	int ret = 0;
 
 	node = binder_get_node(proc, fp->binder);
 	if (!node) {
@@ -1457,16 +1514,19 @@ static int binder_translate_binder(struct flat_binder_object *fp,
 				  proc->pid, thread->pid, (u64)fp->binder,
 				  node->debug_id, (u64)fp->cookie,
 				  (u64)node->cookie);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto done;
+	}
+	if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
+		ret = -EPERM;
+		goto done;
 	}
-	if (security_binder_transfer_binder(proc->tsk, target_proc->tsk))
-		return -EPERM;
 
 	ret = binder_inc_ref_for_node(target_proc, node,
 			fp->hdr.type == BINDER_TYPE_BINDER,
 			&thread->todo, &rdata);
 	if (ret)
-		return ret;
+		goto done;
 
 	if (fp->hdr.type == BINDER_TYPE_BINDER)
 		fp->hdr.type = BINDER_TYPE_HANDLE;
@@ -1481,7 +1541,9 @@ static int binder_translate_binder(struct flat_binder_object *fp,
 		     "        node %d u%016llx -> ref %d desc %d\n",
 		     node->debug_id, (u64)node->ptr,
 		     rdata.debug_id, rdata.desc);
-	return 0;
+done:
+	binder_put_node(node);
+	return ret;
 }
 
 static int binder_translate_handle(struct flat_binder_object *fp,
@@ -1492,6 +1554,7 @@ static int binder_translate_handle(struct flat_binder_object *fp,
 	struct binder_proc *target_proc = t->to_proc;
 	struct binder_node *node;
 	struct binder_ref_data src_rdata;
+	int ret = 0;
 
 	node = binder_get_node_from_ref(proc, fp->handle,
 			fp->hdr.type == BINDER_TYPE_HANDLE, &src_rdata);
@@ -1500,8 +1563,10 @@ static int binder_translate_handle(struct flat_binder_object *fp,
 				  proc->pid, thread->pid, fp->handle);
 		return -EINVAL;
 	}
-	if (security_binder_transfer_binder(proc->tsk, target_proc->tsk))
-		return -EPERM;
+	if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
+		ret = -EPERM;
+		goto done;
+	}
 
 	if (node->proc == target_proc) {
 		if (fp->hdr.type == BINDER_TYPE_HANDLE)
@@ -1526,7 +1591,7 @@ static int binder_translate_handle(struct flat_binder_object *fp,
 				fp->hdr.type == BINDER_TYPE_HANDLE,
 				NULL, &dest_rdata);
 		if (ret)
-			return ret;
+			goto done;
 
 		fp->binder = 0;
 		fp->handle = dest_rdata.desc;
@@ -1539,7 +1604,9 @@ static int binder_translate_handle(struct flat_binder_object *fp,
 			     dest_rdata.debug_id, dest_rdata.desc,
 			     node->debug_id);
 	}
-	return 0;
+done:
+	binder_put_node(node);
+	return ret;
 }
 
 static int binder_translate_fd(int fd,
@@ -2381,6 +2448,7 @@ static int binder_thread_write(struct binder_proc *proc,
 					"BC_INCREFS_DONE" : "BC_ACQUIRE_DONE",
 					(u64)node_ptr, node->debug_id,
 					(u64)cookie, (u64)node->cookie);
+				binder_put_node(node);
 				break;
 			}
 			if (cmd == BC_ACQUIRE_DONE) {
@@ -2388,6 +2456,7 @@ static int binder_thread_write(struct binder_proc *proc,
 					binder_user_error("%d:%d BC_ACQUIRE_DONE node %d has no pending acquire request\n",
 						proc->pid, thread->pid,
 						node->debug_id);
+					binder_put_node(node);
 					break;
 				}
 				node->pending_strong_ref = 0;
@@ -2396,16 +2465,19 @@ static int binder_thread_write(struct binder_proc *proc,
 					binder_user_error("%d:%d BC_INCREFS_DONE node %d has no pending increfs request\n",
 						proc->pid, thread->pid,
 						node->debug_id);
+					binder_put_node(node);
 					break;
 				}
 				node->pending_weak_ref = 0;
 			}
 			binder_dec_node(node, cmd == BC_ACQUIRE_DONE, 0);
 			binder_debug(BINDER_DEBUG_USER_REFS,
-				     "%d:%d %s node %d ls %d lw %d\n",
+				     "%d:%d %s node %d ls %d lw %d tr %d\n",
 				     proc->pid, thread->pid,
 				     cmd == BC_INCREFS_DONE ? "BC_INCREFS_DONE" : "BC_ACQUIRE_DONE",
-				     node->debug_id, node->local_strong_refs, node->local_weak_refs);
+				     node->debug_id, node->local_strong_refs,
+				     node->local_weak_refs, node->tmp_refs);
+			binder_put_node(node);
 			break;
 		}
 		case BC_ATTEMPT_ACQUIRE:
@@ -2845,7 +2917,8 @@ static int binder_thread_read(struct binder_proc *proc,
 			strong = node->internal_strong_refs ||
 					node->local_strong_refs;
 			weak = !hlist_empty(&node->refs) ||
-					node->local_weak_refs || strong;
+					node->local_weak_refs ||
+					node->tmp_refs || strong;
 			has_strong_ref = node->has_strong_ref;
 			has_weak_ref = node->has_weak_ref;
 
@@ -3357,6 +3430,7 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp)
 	new_node->has_strong_ref = 1;
 	new_node->has_weak_ref = 1;
 	context->binder_context_mgr_node = new_node;
+	binder_put_node(new_node);
 out:
 	mutex_unlock(&context->context_mgr_node_lock);
 	return ret;
@@ -3615,8 +3689,11 @@ static int binder_node_release(struct binder_node *node, int refs)
 
 	list_del_init(&node->work.entry);
 	binder_release_work(&node->async_todo);
-
-	if (hlist_empty(&node->refs)) {
+	/*
+	 * The caller must have taken a temporary ref on the node.
+	 */
+	BUG_ON(!node->tmp_refs);
+	if (hlist_empty(&node->refs) && node->tmp_refs == 1) {
 		kfree(node);
 		binder_stats_deleted(BINDER_STAT_NODE);
 
@@ -3651,6 +3728,7 @@ static int binder_node_release(struct binder_node *node, int refs)
 	binder_debug(BINDER_DEBUG_DEAD_BINDER,
 		     "node %d now dead, refs %d, death %d\n",
 		     node->debug_id, refs, death);
+	binder_put_node(node);
 
 	return refs;
 }
@@ -3700,6 +3778,12 @@ static void binder_deferred_release(struct binder_proc *proc)
 
 		node = rb_entry(n, struct binder_node, rb_node);
 		nodes++;
+		/*
+		 * take a temporary ref on the node before
+		 * calling binder_node_release() which will either
+		 * kfree() the node or call binder_put_node()
+		 */
+		binder_inc_node_tmpref(node);
 		rb_erase(&node->rb_node, &proc->nodes);
 		incoming_refs = binder_node_release(node, incoming_refs);
 	}
@@ -3895,11 +3979,11 @@ static void print_binder_node(struct seq_file *m, struct binder_node *node)
 	hlist_for_each_entry(ref, &node->refs, node_entry)
 		count++;
 
-	seq_printf(m, "  node %d: u%016llx c%016llx hs %d hw %d ls %d lw %d is %d iw %d",
+	seq_printf(m, "  node %d: u%016llx c%016llx hs %d hw %d ls %d lw %d is %d iw %d tr %d",
 		   node->debug_id, (u64)node->ptr, (u64)node->cookie,
 		   node->has_strong_ref, node->has_weak_ref,
 		   node->local_strong_refs, node->local_weak_refs,
-		   node->internal_strong_refs, count);
+		   node->internal_strong_refs, count, node->tmp_refs);
 	if (count) {
 		seq_puts(m, " proc");
 		hlist_for_each_entry(ref, &node->refs, node_entry)
-- 
2.13.2.725.g09c95d1e9-goog

* [PATCH 26/37] binder: introduce locking helper functions
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (24 preceding siblings ...)
  2017-06-29 19:01 ` [PATCH 25/37] binder: use node->tmp_refs to ensure node safety Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 27/37] binder: use inner lock to sync work dq and node counts Todd Kjos
                   ` (11 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

There are 3 main spinlocks which must be acquired in this
order:
1) proc->outer_lock : protects most fields of binder_proc,
	binder_thread, and binder_ref structures. binder_proc_lock()
	and binder_proc_unlock() are used to acq/rel.
2) node->lock : protects most fields of binder_node.
	binder_node_lock() and binder_node_unlock() are
	used to acq/rel
3) proc->inner_lock : protects the thread and node lists
	(proc->threads, proc->nodes) and all todo lists associated
	with the binder_proc (proc->todo, thread->todo,
	proc->delivered_death and node->async_todo).
	binder_inner_proc_lock() and binder_inner_proc_unlock()
	are used to acq/rel

Any lock under procA must never be nested under any lock at the same
level or below on procB.

Functions that require a lock held on entry indicate which lock
in the suffix of the function name:

foo_olocked() : requires proc->outer_lock
foo_nlocked() : requires node->lock
foo_ilocked() : requires proc->inner_lock
foo_oilocked(): requires proc->outer_lock and proc->inner_lock
foo_nilocked(): requires node->lock and proc->inner_lock
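
As an illustration, a sketch of a correctly ordered acquire/release
sequence using the helpers named above (in practice only the locks
that are actually needed are taken; locks of another proc at the
same or a lower level must never be nested inside):

  binder_proc_lock(proc);          /* 1) proc->outer_lock */
  binder_node_lock(node);          /* 2) node->lock */
  binder_inner_proc_lock(proc);    /* 3) proc->inner_lock */
  /* ... access fields protected by the respective locks ... */
  binder_inner_proc_unlock(proc);  /* release in reverse order */
  binder_node_unlock(node);
  binder_proc_unlock(proc);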

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 238 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 238 insertions(+)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index ec050c6d1192..91fece5c067f 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -15,6 +15,39 @@
  *
  */
 
+/*
+ * Locking overview
+ *
+ * There are 3 main spinlocks which must be acquired in the
+ * order shown:
+ *
+ * 1) proc->outer_lock : protects binder_ref
+ *    binder_proc_lock() and binder_proc_unlock() are
+ *    used to acq/rel.
+ * 2) node->lock : protects most fields of binder_node.
+ *    binder_node_lock() and binder_node_unlock() are
+ *    used to acq/rel
+ * 3) proc->inner_lock : protects the thread and node lists
+ *    (proc->threads, proc->nodes) and all todo lists associated
+ *    with the binder_proc (proc->todo, thread->todo,
+ *    proc->delivered_death and node->async_todo).
+ *    binder_inner_proc_lock() and binder_inner_proc_unlock()
+ *    are used to acq/rel
+ *
+ * Any lock under procA must never be nested under any lock at the same
+ * level or below on procB.
+ *
+ * Functions that require a lock held on entry indicate which lock
+ * in the suffix of the function name:
+ *
+ * foo_olocked() : requires proc->outer_lock
+ * foo_nlocked() : requires node->lock
+ * foo_ilocked() : requires proc->inner_lock
+ * foo_oilocked(): requires proc->outer_lock and proc->inner_lock
+ * foo_nilocked(): requires node->lock and proc->inner_lock
+ * ...
+ */
+
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <asm/cacheflush.h>
@@ -36,6 +69,7 @@
 #include <linux/uaccess.h>
 #include <linux/pid_namespace.h>
 #include <linux/security.h>
+#include <linux/spinlock.h>
 
 #ifdef CONFIG_ANDROID_BINDER_IPC_32BIT
 #define BINDER_IPC_32BIT 1
@@ -106,6 +140,7 @@ enum {
 	BINDER_DEBUG_FREE_BUFFER            = 1U << 11,
 	BINDER_DEBUG_INTERNAL_REFS          = 1U << 12,
 	BINDER_DEBUG_PRIORITY_CAP           = 1U << 13,
+	BINDER_DEBUG_SPINLOCKS              = 1U << 14,
 };
 static uint32_t binder_debug_mask = BINDER_DEBUG_USER_ERROR |
 	BINDER_DEBUG_FAILED_TRANSACTION | BINDER_DEBUG_DEAD_TRANSACTION;
@@ -262,8 +297,43 @@ struct binder_error {
 	uint32_t cmd;
 };
 
+/**
+ * struct binder_node - binder node bookkeeping
+ * @debug_id:             unique ID for debugging
+ *                        (invariant after initialized)
+ * @lock:                 lock for node fields
+ * @work:                 worklist element for node work
+ * @rb_node:              element for proc->nodes tree
+ * @dead_node:            element for binder_dead_nodes list
+ *                        (protected by binder_dead_nodes_lock)
+ * @proc:                 binder_proc that owns this node
+ *                        (invariant after initialized)
+ * @refs:                 list of references on this node
+ * @internal_strong_refs: used to take strong references when
+ *                        initiating a transaction
+ * @local_weak_refs:      weak user refs from local process
+ * @local_strong_refs:    strong user refs from local process
+ * @tmp_refs:             temporary kernel refs
+ * @ptr:                  userspace pointer for node
+ *                        (invariant, no lock needed)
+ * @cookie:               userspace cookie for node
+ *                        (invariant, no lock needed)
+ * @has_strong_ref:       userspace notified of strong ref
+ * @pending_strong_ref:   userspace has acked notification of strong ref
+ * @has_weak_ref:         userspace notified of weak ref
+ * @pending_weak_ref:     userspace has acked notification of weak ref
+ * @has_async_transaction: async transaction to node in progress
+ * @accept_fds:           file descriptor operations supported for node
+ *                        (invariant after initialized)
+ * @min_priority:         minimum scheduling priority
+ *                        (invariant after initialized)
+ * @async_todo:           list of async work items
+ *
+ * Bookkeeping structure for binder nodes.
+ */
 struct binder_node {
 	int debug_id;
+	spinlock_t lock;
 	struct binder_work work;
 	union {
 		struct rb_node rb_node;
@@ -346,6 +416,51 @@ enum binder_deferred_state {
 	BINDER_DEFERRED_RELEASE      = 0x04,
 };
 
+/**
+ * struct binder_proc - binder process bookkeeping
+ * @proc_node:            element for binder_procs list
+ * @threads:              rbtree of binder_threads in this proc
+ * @nodes:                rbtree of binder nodes associated with
+ *                        this proc ordered by node->ptr
+ * @refs_by_desc:         rbtree of refs ordered by ref->desc
+ * @refs_by_node:         rbtree of refs ordered by ref->node
+ * @pid:                  PID of group_leader of process
+ *                        (invariant after initialized)
+ * @tsk:                  task_struct for group_leader of process
+ *                        (invariant after initialized)
+ * @files:                files_struct for process
+ *                        (invariant after initialized)
+ * @deferred_work_node:   element for binder_deferred_list
+ *                        (protected by binder_deferred_lock)
+ * @deferred_work:        bitmap of deferred work to perform
+ *                        (protected by binder_deferred_lock)
+ * @is_dead:              process is dead and awaiting free
+ *                        when outstanding transactions are cleaned up
+ * @todo:                 list of work for this process
+ * @wait:                 wait queue head to wait for proc work
+ *                        (invariant after initialized)
+ * @stats:                per-process binder statistics
+ *                        (atomics, no lock needed)
+ * @delivered_death:      list of delivered death notifications
+ * @max_threads:          cap on number of binder threads
+ * @requested_threads:    number of binder threads requested but not
+ *                        yet started. In current implementation, can
+ *                        only be 0 or 1.
+ * @requested_threads_started: number of binder threads started
+ * @ready_threads:        number of threads waiting for proc work
+ * @tmp_ref:              temporary reference to indicate proc is in use
+ * @default_priority:     default scheduler priority
+ *                        (invariant after initialized)
+ * @debugfs_entry:        debugfs node
+ * @alloc:                binder allocator bookkeeping
+ * @context:              binder_context for this proc
+ *                        (invariant after initialized)
+ * @inner_lock:           can nest under outer_lock and/or node lock
+ * @outer_lock:           no nesting under inner or node lock
+ *                        Lock order: 1) outer, 2) node, 3) inner
+ *
+ * Bookkeeping structure for binder processes.
+ */
 struct binder_proc {
 	struct hlist_node proc_node;
 	struct rb_root threads;
@@ -372,6 +487,8 @@ struct binder_proc {
 	struct dentry *debugfs_entry;
 	struct binder_alloc alloc;
 	struct binder_context *context;
+	spinlock_t inner_lock;
+	spinlock_t outer_lock;
 };
 
 enum {
@@ -382,6 +499,33 @@ enum {
 	BINDER_LOOPER_STATE_WAITING     = 0x10,
 };
 
+/**
+ * struct binder_thread - binder thread bookkeeping
+ * @proc:                 binder process for this thread
+ *                        (invariant after initialization)
+ * @rb_node:              element for proc->threads rbtree
+ * @pid:                  PID for this thread
+ *                        (invariant after initialization)
+ * @looper:               bitmap of looping state
+ *                        (only accessed by this thread)
+ * @looper_need_return:   looping thread needs to exit driver
+ *                        (no lock needed)
+ * @transaction_stack:    stack of in-progress transactions for this thread
+ * @todo:                 list of work to do for this thread
+ * @return_error:         transaction errors reported by this thread
+ *                        (only accessed by this thread)
+ * @reply_error:          transaction errors reported by target thread
+ * @wait:                 wait queue for thread work
+ * @stats:                per-thread statistics
+ *                        (atomics, no lock needed)
+ * @tmp_ref:              temporary reference to indicate thread is in use
+ *                        (atomic since @proc->inner_lock cannot
+ *                        always be acquired)
+ * @is_dead:              thread is dead and awaiting free
+ *                        when outstanding transactions are cleaned up
+ *
+ * Bookkeeping structure for binder threads.
+ */
 struct binder_thread {
 	struct binder_proc *proc;
 	struct rb_node rb_node;
@@ -424,6 +568,97 @@ struct binder_transaction {
 	spinlock_t lock;
 };
 
+/**
+ * binder_proc_lock() - Acquire outer lock for given binder_proc
+ * @proc:         struct binder_proc to acquire
+ *
+ * Acquires proc->outer_lock. Used to protect binder_ref
+ * structures associated with the given proc.
+ */
+#define binder_proc_lock(proc) _binder_proc_lock(proc, __LINE__)
+static void
+_binder_proc_lock(struct binder_proc *proc, int line)
+{
+	binder_debug(BINDER_DEBUG_SPINLOCKS,
+		     "%s: line=%d\n", __func__, line);
+	spin_lock(&proc->outer_lock);
+}
+
+/**
+ * binder_proc_unlock() - Release spinlock for given binder_proc
+ * @proc:         struct binder_proc to unlock
+ *
+ * Release lock acquired via binder_proc_lock()
+ */
+#define binder_proc_unlock(_proc) _binder_proc_unlock(_proc, __LINE__)
+static void
+_binder_proc_unlock(struct binder_proc *proc, int line)
+{
+	binder_debug(BINDER_DEBUG_SPINLOCKS,
+		     "%s: line=%d\n", __func__, line);
+	spin_unlock(&proc->outer_lock);
+}
+
+/**
+ * binder_inner_proc_lock() - Acquire inner lock for given binder_proc
+ * @proc:         struct binder_proc to acquire
+ *
+ * Acquires proc->inner_lock. Used to protect todo lists
+ */
+#define binder_inner_proc_lock(proc) _binder_inner_proc_lock(proc, __LINE__)
+static void
+_binder_inner_proc_lock(struct binder_proc *proc, int line)
+{
+	binder_debug(BINDER_DEBUG_SPINLOCKS,
+		     "%s: line=%d\n", __func__, line);
+	spin_lock(&proc->inner_lock);
+}
+
+/**
+ * binder_inner_proc_unlock() - Release inner lock for given binder_proc
+ * @proc:         struct binder_proc to unlock
+ *
+ * Release lock acquired via binder_inner_proc_lock()
+ */
+#define binder_inner_proc_unlock(proc) _binder_inner_proc_unlock(proc, __LINE__)
+static void
+_binder_inner_proc_unlock(struct binder_proc *proc, int line)
+{
+	binder_debug(BINDER_DEBUG_SPINLOCKS,
+		     "%s: line=%d\n", __func__, line);
+	spin_unlock(&proc->inner_lock);
+}
+
+/**
+ * binder_node_lock() - Acquire spinlock for given binder_node
+ * @node:         struct binder_node to acquire
+ *
+ * Acquires node->lock. Used to protect binder_node fields
+ */
+#define binder_node_lock(node) _binder_node_lock(node, __LINE__)
+static void
+_binder_node_lock(struct binder_node *node, int line)
+{
+	binder_debug(BINDER_DEBUG_SPINLOCKS,
+		     "%s: line=%d\n", __func__, line);
+	spin_lock(&node->lock);
+}
+
+/**
+ * binder_node_unlock() - Release spinlock for given binder_node
+ * @node:         struct binder_node to unlock
+ *
+ * Release lock acquired via binder_node_lock()
+ */
+#define binder_node_unlock(node) _binder_node_unlock(node, __LINE__)
+static void
+_binder_node_unlock(struct binder_node *node, int line)
+{
+	binder_debug(BINDER_DEBUG_SPINLOCKS,
+		     "%s: line=%d\n", __func__, line);
+	spin_unlock(&node->lock);
+}
+
 static void
 binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
 static void binder_free_thread(struct binder_thread *thread);
@@ -568,6 +803,7 @@ static struct binder_node *binder_new_node(struct binder_proc *proc,
 	node->ptr = ptr;
 	node->cookie = cookie;
 	node->work.type = BINDER_WORK_NODE;
+	spin_lock_init(&node->lock);
 	INIT_LIST_HEAD(&node->work.entry);
 	INIT_LIST_HEAD(&node->async_todo);
 	binder_debug(BINDER_DEBUG_INTERNAL_REFS,
@@ -3599,6 +3835,8 @@ static int binder_open(struct inode *nodp, struct file *filp)
 	proc = kzalloc(sizeof(*proc), GFP_KERNEL);
 	if (proc == NULL)
 		return -ENOMEM;
+	spin_lock_init(&proc->inner_lock);
+	spin_lock_init(&proc->outer_lock);
 	get_task_struct(current->group_leader);
 	proc->tsk = current->group_leader;
 	INIT_LIST_HEAD(&proc->todo);
-- 
2.13.2.725.g09c95d1e9-goog
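
To make the locking helpers above concrete, here is a minimal
userspace sketch using pthreads. All names (fake_proc, proc_lock and
so on) are illustrative, not driver API; the real code uses
spinlock_t and binder_debug(). The sketch shows the two ideas the
helpers combine: thin wrappers that record the calling line for lock
tracing, and a fixed acquisition order across the lock levels.

/* Build with: gcc -pthread lock_sketch.c */
#include <pthread.h>
#include <stdio.h>

struct fake_proc {
	pthread_spinlock_t outer_lock;	/* level 1 */
	pthread_spinlock_t inner_lock;	/* level 3 */
};

/*
 * Wrappers take __LINE__ so every acquire/release can be traced back
 * to its call site, mirroring _binder_proc_lock(proc, __LINE__).
 */
#define proc_lock(p)   _proc_lock(p, __LINE__)
#define proc_unlock(p) _proc_unlock(p, __LINE__)

static void _proc_lock(struct fake_proc *p, int line)
{
	printf("%s: line=%d\n", __func__, line);
	pthread_spin_lock(&p->outer_lock);
}

static void _proc_unlock(struct fake_proc *p, int line)
{
	printf("%s: line=%d\n", __func__, line);
	pthread_spin_unlock(&p->outer_lock);
}

int main(void)
{
	struct fake_proc p;

	pthread_spin_init(&p.outer_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_spin_init(&p.inner_lock, PTHREAD_PROCESS_PRIVATE);

	/* Level 1 (outer) is always taken before level 3 (inner),
	 * never the reverse, so lock-order inversion between any two
	 * such paths is impossible.
	 */
	proc_lock(&p);
	pthread_spin_lock(&p.inner_lock);
	pthread_spin_unlock(&p.inner_lock);
	proc_unlock(&p);
	return 0;
}

Keeping the order fixed is the whole contract: the comment block at
the top of the file is what every call path must follow.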


* [PATCH 27/37] binder: use inner lock to sync work dq and node counts
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (25 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 26/37] binder: introduce locking helper functions Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 28/37] binder: add spinlocks to protect todo lists Todd Kjos
                   ` (10 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

For correct behavior we need to hold the inner lock when
dequeuing and processing node work in binder_thread_read.
We now hold the inner lock when we enter the switch statement
and release it after processing anything that might be
affected by other threads.

We also need to hold the inner lock to protect the node
weak/strong ref tracking fields as long as node->proc
is non-NULL (if it is NULL then we are guaranteed that
we don't have any node work queued).

This means that other functions that manipulate these fields
must hold the inner lock. Refactored these functions to use
the inner lock.
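
The locking rule described above can be sketched in a few lines of
userspace C, modeled on binder_inc_node_tmpref() in the diff below
(pthreads; fake_proc, fake_node and node_tmpref_inc are illustrative
names, not driver API):

#include <pthread.h>

static pthread_mutex_t dead_nodes_lock = PTHREAD_MUTEX_INITIALIZER;

struct fake_proc { pthread_mutex_t inner_lock; };

struct fake_node {
	struct fake_proc *proc;	/* NULL once the owning proc is gone */
	int tmp_refs;
};

static void node_tmpref_inc(struct fake_node *node)
{
	/* Pick whichever lock currently guards the count. Patch 29 in
	 * this series additionally takes node->lock here, so that
	 * node->proc cannot change between these two checks.
	 */
	if (node->proc)
		pthread_mutex_lock(&node->proc->inner_lock);
	else
		pthread_mutex_lock(&dead_nodes_lock);

	node->tmp_refs++;

	if (node->proc)
		pthread_mutex_unlock(&node->proc->inner_lock);
	else
		pthread_mutex_unlock(&dead_nodes_lock);
}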

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 249 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 198 insertions(+), 51 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 91fece5c067f..6c741416fa00 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -311,17 +311,36 @@ struct binder_error {
  * @refs:                 list of references on this node
  * @internal_strong_refs: used to take strong references when
  *                        initiating a transaction
+ *                        (protected by @proc->inner_lock if @proc
+ *                        and by @lock)
  * @local_weak_refs:      weak user refs from local process
+ *                        (protected by @proc->inner_lock if @proc
+ *                        and by @lock)
  * @local_strong_refs:    strong user refs from local process
+ *                        (protected by @proc->inner_lock if @proc
+ *                        and by @lock)
  * @tmp_refs:             temporary kernel refs
+ *                        (protected by @proc->inner_lock while @proc
+ *                        is valid, and by binder_dead_nodes_lock
+ *                        if @proc is NULL. During inc/dec and node release
+ *                        it is also protected by @lock to provide safety
+ *                        as the node dies and @proc becomes NULL)
  * @ptr:                  userspace pointer for node
  *                        (invariant, no lock needed)
  * @cookie:               userspace cookie for node
  *                        (invariant, no lock needed)
  * @has_strong_ref:       userspace notified of strong ref
+ *                        (protected by @proc->inner_lock if @proc
+ *                        and by @lock)
  * @pending_strong_ref:   userspace has acked notification of strong ref
+ *                        (protected by @proc->inner_lock if @proc
+ *                        and by @lock)
  * @has_weak_ref:         userspace notified of weak ref
+ *                        (protected by @proc->inner_lock if @proc
+ *                        and by @lock)
  * @pending_weak_ref:     userspace has acked notification of weak ref
+ *                        (protected by @proc->inner_lock if @proc
+ *                        and by @lock)
  * @has_async_transaction: async transaction to node in progress
  * @accept_fds:           file descriptor operations supported for node
  *                        (invariant after initialized)
@@ -347,13 +366,24 @@ struct binder_node {
 	int tmp_refs;
 	binder_uintptr_t ptr;
 	binder_uintptr_t cookie;
-	unsigned has_strong_ref:1;
-	unsigned pending_strong_ref:1;
-	unsigned has_weak_ref:1;
-	unsigned pending_weak_ref:1;
-	unsigned has_async_transaction:1;
-	unsigned accept_fds:1;
-	unsigned min_priority:8;
+	struct {
+		/*
+		 * bitfield elements protected by
+		 * proc inner_lock
+		 */
+		u8 has_strong_ref:1;
+		u8 pending_strong_ref:1;
+		u8 has_weak_ref:1;
+		u8 pending_weak_ref:1;
+	};
+	struct {
+		/*
+		 * invariant after initialization
+		 */
+		u8 accept_fds:1;
+		u8 min_priority;
+	};
+	bool has_async_transaction;
 	struct list_head async_todo;
 };
 
@@ -813,9 +843,18 @@ static struct binder_node *binder_new_node(struct binder_proc *proc,
 	return node;
 }
 
-static int binder_inc_node(struct binder_node *node, int strong, int internal,
-			   struct list_head *target_list)
+static void binder_free_node(struct binder_node *node)
 {
+	kfree(node);
+	binder_stats_deleted(BINDER_STAT_NODE);
+}
+
+static int binder_inc_node_ilocked(struct binder_node *node, int strong,
+				   int internal,
+				   struct list_head *target_list)
+{
+	if (node->proc)
+		BUG_ON(!spin_is_locked(&node->proc->inner_lock));
 	if (strong) {
 		if (internal) {
 			if (target_list == NULL &&
@@ -849,23 +888,43 @@ static int binder_inc_node(struct binder_node *node, int strong, int internal,
 	return 0;
 }
 
-static int binder_dec_node(struct binder_node *node, int strong, int internal)
+static int binder_inc_node(struct binder_node *node, int strong, int internal,
+			   struct list_head *target_list)
+{
+	int ret;
+
+	if (node->proc)
+		binder_inner_proc_lock(node->proc);
+	ret = binder_inc_node_ilocked(node, strong, internal, target_list);
+	if (node->proc)
+		binder_inner_proc_unlock(node->proc);
+
+	return ret;
+}
+
+static bool binder_dec_node_ilocked(struct binder_node *node,
+				    int strong, int internal)
 {
+	struct binder_proc *proc = node->proc;
+
+	if (proc)
+		BUG_ON(!spin_is_locked(&proc->inner_lock));
 	if (strong) {
 		if (internal)
 			node->internal_strong_refs--;
 		else
 			node->local_strong_refs--;
 		if (node->local_strong_refs || node->internal_strong_refs)
-			return 0;
+			return false;
 	} else {
 		if (!internal)
 			node->local_weak_refs--;
 		if (node->local_weak_refs || node->tmp_refs ||
 				!hlist_empty(&node->refs))
-			return 0;
+			return false;
 	}
-	if (node->proc && (node->has_strong_ref || node->has_weak_ref)) {
+
+	if (proc && (node->has_strong_ref || node->has_weak_ref)) {
 		if (list_empty(&node->work.entry)) {
 			list_add_tail(&node->work.entry, &node->proc->todo);
 			wake_up_interruptible(&node->proc->wait);
@@ -874,25 +933,55 @@ static int binder_dec_node(struct binder_node *node, int strong, int internal)
 		if (hlist_empty(&node->refs) && !node->local_strong_refs &&
 		    !node->local_weak_refs && !node->tmp_refs) {
 			list_del_init(&node->work.entry);
-			if (node->proc) {
+			if (proc) {
 				rb_erase(&node->rb_node, &node->proc->nodes);
 				binder_debug(BINDER_DEBUG_INTERNAL_REFS,
 					     "refless node %d deleted\n",
 					     node->debug_id);
 			} else {
 				spin_lock(&binder_dead_nodes_lock);
+				/*
+				 * tmp_refs could have changed so
+				 * check it again
+				 */
+				if (node->tmp_refs) {
+					spin_unlock(&binder_dead_nodes_lock);
+					return false;
+				}
 				hlist_del(&node->dead_node);
 				spin_unlock(&binder_dead_nodes_lock);
 				binder_debug(BINDER_DEBUG_INTERNAL_REFS,
 					     "dead node %d deleted\n",
 					     node->debug_id);
 			}
-			kfree(node);
-			binder_stats_deleted(BINDER_STAT_NODE);
+			return true;
 		}
 	}
+	return false;
+}
 
-	return 0;
+static void binder_dec_node(struct binder_node *node, int strong, int internal)
+{
+	bool free_node;
+
+	if (node->proc)
+		binder_inner_proc_lock(node->proc);
+	free_node = binder_dec_node_ilocked(node, strong, internal);
+	if (node->proc)
+		binder_inner_proc_unlock(node->proc);
+
+	if (free_node)
+		binder_free_node(node);
+}
+
+static void binder_inc_node_tmpref_ilocked(struct binder_node *node)
+{
+	/*
+	 * No call to binder_inc_node() is needed since we
+	 * don't need to inform userspace of any changes to
+	 * tmp_refs
+	 */
+	node->tmp_refs++;
 }
 
 /**
@@ -900,16 +989,25 @@ static int binder_dec_node(struct binder_node *node, int strong, int internal)
  * @node:	node to reference
  *
  * Take reference on node to prevent the node from being freed
- * while referenced only by a local variable
+ * while referenced only by a local variable. The inner lock is
+ * needed to serialize with the node work on the queue (which
+ * isn't needed after the node is dead). If the node is dead
+ * (node->proc is NULL), use binder_dead_nodes_lock to protect
+ * node->tmp_refs against dead-node-only cases where the node
+ * lock cannot be acquired (e.g. traversing the dead node list to
+ * print nodes)
  */
 static void binder_inc_node_tmpref(struct binder_node *node)
 {
-	/*
-	 * No call to binder_inc_node() is needed since we
-	 * don't need to inform userspace of any changes to
-	 * tmp_refs
-	 */
-	node->tmp_refs++;
+	if (node->proc)
+		binder_inner_proc_lock(node->proc);
+	else
+		spin_lock(&binder_dead_nodes_lock);
+	binder_inc_node_tmpref_ilocked(node);
+	if (node->proc)
+		binder_inner_proc_unlock(node->proc);
+	else
+		spin_unlock(&binder_dead_nodes_lock);
 }
 
 /**
@@ -920,15 +1018,27 @@ static void binder_inc_node_tmpref(struct binder_node *node)
  */
 static void binder_dec_node_tmpref(struct binder_node *node)
 {
+	bool free_node;
+
+	if (node->proc)
+		binder_inner_proc_lock(node->proc);
+	else
+		spin_lock(&binder_dead_nodes_lock);
 	node->tmp_refs--;
 	BUG_ON(node->tmp_refs < 0);
+	if (!node->proc)
+		spin_unlock(&binder_dead_nodes_lock);
 	/*
 	 * Call binder_dec_node() to check if all refcounts are 0
 	 * and cleanup is needed. Calling with strong=0 and internal=1
 	 * causes no actual reference to be released in binder_dec_node().
 	 * If that changes, a change is needed here too.
 	 */
-	binder_dec_node(node, 0, 1);
+	free_node = binder_dec_node_ilocked(node, 0, 1);
+	if (node->proc)
+		binder_inner_proc_unlock(node->proc);
+	if (free_node)
+		binder_free_node(node);
 }
 
 static void binder_put_node(struct binder_node *node)
@@ -1041,6 +1151,9 @@ static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
 
 static void binder_cleanup_ref(struct binder_ref *ref)
 {
+	bool delete_node = false;
+	struct binder_proc *node_proc = ref->node->proc;
+
 	binder_debug(BINDER_DEBUG_INTERNAL_REFS,
 		     "%d delete ref %d desc %d for node %d\n",
 		      ref->proc->pid, ref->data.debug_id, ref->data.desc,
@@ -1049,11 +1162,26 @@ static void binder_cleanup_ref(struct binder_ref *ref)
 	rb_erase(&ref->rb_node_desc, &ref->proc->refs_by_desc);
 	rb_erase(&ref->rb_node_node, &ref->proc->refs_by_node);
 
+	if (node_proc)
+		binder_inner_proc_lock(node_proc);
 	if (ref->data.strong)
-		binder_dec_node(ref->node, 1, 1);
+		binder_dec_node_ilocked(ref->node, 1, 1);
 
 	hlist_del(&ref->node_entry);
-	binder_dec_node(ref->node, 0, 1);
+	delete_node = binder_dec_node_ilocked(ref->node, 0, 1);
+	if (node_proc)
+		binder_inner_proc_unlock(node_proc);
+	/*
+	 * Clear ref->node unless we want the caller to free the node
+	 */
+	if (!delete_node) {
+		/*
+		 * The caller uses ref->node to determine
+		 * whether the node needs to be freed. Clear
+		 * it since the node is still alive.
+		 */
+		ref->node = NULL;
+	}
 
 	if (ref->death) {
 		binder_debug(BINDER_DEBUG_DEAD_BINDER,
@@ -1122,13 +1250,8 @@ static bool binder_dec_ref(struct binder_ref *ref, int strong)
 			return false;
 		}
 		ref->data.strong--;
-		if (ref->data.strong == 0) {
-			int ret;
-
-			ret = binder_dec_node(ref->node, strong, 1);
-			if (ret)
-				return false;
-		}
+		if (ref->data.strong == 0)
+			binder_dec_node(ref->node, strong, 1);
 	} else {
 		if (ref->data.weak == 0) {
 			binder_user_error("%d invalid dec weak, ref %d desc %d s %d w %d\n",
@@ -1193,10 +1316,13 @@ static struct binder_node *binder_get_node_from_ref(
  * binder_free_ref() - free the binder_ref
  * @ref:	ref to free
  *
- * Free the binder_ref and the binder_ref_death indicated by ref->death.
+ * Free the binder_ref. Free the binder_node indicated by ref->node
+ * (if non-NULL) and the binder_ref_death indicated by ref->death.
  */
 static void binder_free_ref(struct binder_ref *ref)
 {
+	if (ref->node)
+		binder_free_node(ref->node);
 	kfree(ref->death);
 	kfree(ref);
 }
@@ -2687,11 +2813,13 @@ static int binder_thread_write(struct binder_proc *proc,
 				binder_put_node(node);
 				break;
 			}
+			binder_inner_proc_lock(proc);
 			if (cmd == BC_ACQUIRE_DONE) {
 				if (node->pending_strong_ref == 0) {
 					binder_user_error("%d:%d BC_ACQUIRE_DONE node %d has no pending acquire request\n",
 						proc->pid, thread->pid,
 						node->debug_id);
+					binder_inner_proc_unlock(proc);
 					binder_put_node(node);
 					break;
 				}
@@ -2701,11 +2829,13 @@ static int binder_thread_write(struct binder_proc *proc,
 					binder_user_error("%d:%d BC_INCREFS_DONE node %d has no pending increfs request\n",
 						proc->pid, thread->pid,
 						node->debug_id);
+					binder_inner_proc_unlock(proc);
 					binder_put_node(node);
 					break;
 				}
 				node->pending_weak_ref = 0;
 			}
+			binder_inner_proc_unlock(proc);
 			binder_dec_node(node, cmd == BC_ACQUIRE_DONE, 0);
 			binder_debug(BINDER_DEBUG_USER_REFS,
 				     "%d:%d %s node %d ls %d lw %d tr %d\n",
@@ -3091,6 +3221,7 @@ static int binder_thread_read(struct binder_proc *proc,
 		struct binder_transaction *t = NULL;
 		struct binder_thread *t_from;
 
+		binder_inner_proc_lock(proc);
 		if (!list_empty(&thread->todo)) {
 			w = list_first_entry(&thread->todo, struct binder_work,
 					     entry);
@@ -3104,11 +3235,15 @@ static int binder_thread_read(struct binder_proc *proc,
 			break;
 		}
 
-		if (end - ptr < sizeof(tr) + 4)
+		if (end - ptr < sizeof(tr) + 4) {
+			binder_inner_proc_unlock(proc);
 			break;
+		}
+		list_del_init(&w->entry);
 
 		switch (w->type) {
 		case BINDER_WORK_TRANSACTION: {
+			binder_inner_proc_unlock(proc);
 			t = container_of(w, struct binder_transaction, work);
 		} break;
 		case BINDER_WORK_RETURN_ERROR: {
@@ -3116,15 +3251,16 @@ static int binder_thread_read(struct binder_proc *proc,
 					w, struct binder_error, work);
 
 			WARN_ON(e->cmd == BR_OK);
+			binder_inner_proc_unlock(proc);
 			if (put_user(e->cmd, (uint32_t __user *)ptr))
 				return -EFAULT;
 			e->cmd = BR_OK;
 			ptr += sizeof(uint32_t);
 
 			binder_stat_br(proc, thread, cmd);
-			list_del(&w->entry);
 		} break;
 		case BINDER_WORK_TRANSACTION_COMPLETE: {
+			binder_inner_proc_unlock(proc);
 			cmd = BR_TRANSACTION_COMPLETE;
 			if (put_user(cmd, (uint32_t __user *)ptr))
 				return -EFAULT;
@@ -3134,8 +3270,6 @@ static int binder_thread_read(struct binder_proc *proc,
 			binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
 				     "%d:%d BR_TRANSACTION_COMPLETE\n",
 				     proc->pid, thread->pid);
-
-			list_del(&w->entry);
 			kfree(w);
 			binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
 		} break;
@@ -3172,8 +3306,6 @@ static int binder_thread_read(struct binder_proc *proc,
 				node->has_strong_ref = 0;
 			if (!weak && has_weak_ref)
 				node->has_weak_ref = 0;
-			list_del(&w->entry);
-
 			if (!weak && !strong) {
 				binder_debug(BINDER_DEBUG_INTERNAL_REFS,
 					     "%d:%d node %d u%016llx c%016llx deleted\n",
@@ -3182,9 +3314,11 @@ static int binder_thread_read(struct binder_proc *proc,
 					     (u64)node_ptr,
 					     (u64)node_cookie);
 				rb_erase(&node->rb_node, &proc->nodes);
-				kfree(node);
-				binder_stats_deleted(BINDER_STAT_NODE);
-			}
+				binder_inner_proc_unlock(proc);
+				binder_free_node(node);
+			} else
+				binder_inner_proc_unlock(proc);
+
 			if (weak && !has_weak_ref)
 				ret = binder_put_node_cmd(
 						proc, thread, &ptr, node_ptr,
@@ -3226,6 +3360,13 @@ static int binder_thread_read(struct binder_proc *proc,
 				cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE;
 			else
 				cmd = BR_DEAD_BINDER;
+			/*
+			 * TODO: there is a race condition between
+			 * death notification requests and delivery
+			 * of the notifications. This will be handled
+			 * in a later patch.
+			 */
+			binder_inner_proc_unlock(proc);
 			if (put_user(cmd, (uint32_t __user *)ptr))
 				return -EFAULT;
 			ptr += sizeof(uint32_t);
@@ -3243,11 +3384,14 @@ static int binder_thread_read(struct binder_proc *proc,
 				      (u64)death->cookie);
 
 			if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
-				list_del(&w->entry);
 				kfree(death);
 				binder_stats_deleted(BINDER_STAT_DEATH);
-			} else
-				list_move(&w->entry, &proc->delivered_death);
+			} else {
+				binder_inner_proc_lock(proc);
+				list_add_tail(&w->entry,
+					      &proc->delivered_death);
+				binder_inner_proc_unlock(proc);
+			}
 			if (cmd == BR_DEAD_BINDER)
 				goto done; /* DEAD_BINDER notifications can cause transactions */
 		} break;
@@ -3325,7 +3469,6 @@ static int binder_thread_read(struct binder_proc *proc,
 
 		if (t_from)
 			binder_thread_dec_tmpref(t_from);
-		list_del(&t->work.entry);
 		t->buffer->allow_user_free = 1;
 		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
 			t->to_parent = thread->transaction_stack;
@@ -3924,16 +4067,19 @@ static int binder_node_release(struct binder_node *node, int refs)
 {
 	struct binder_ref *ref;
 	int death = 0;
+	struct binder_proc *proc = node->proc;
 
-	list_del_init(&node->work.entry);
 	binder_release_work(&node->async_todo);
+
+	binder_inner_proc_lock(proc);
+	list_del_init(&node->work.entry);
 	/*
 	 * The caller must have taken a temporary ref on the node,
 	 */
 	BUG_ON(!node->tmp_refs);
 	if (hlist_empty(&node->refs) && node->tmp_refs == 1) {
-		kfree(node);
-		binder_stats_deleted(BINDER_STAT_NODE);
+		binder_inner_proc_unlock(proc);
+		binder_free_node(node);
 
 		return refs;
 	}
@@ -3941,6 +4087,7 @@ static int binder_node_release(struct binder_node *node, int refs)
 	node->proc = NULL;
 	node->local_strong_refs = 0;
 	node->local_weak_refs = 0;
+	binder_inner_proc_unlock(proc);
 
 	spin_lock(&binder_dead_nodes_lock);
 	hlist_add_head(&node->dead_node, &binder_dead_nodes);
-- 
2.13.2.725.g09c95d1e9-goog
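
One detail worth noting in the refactor above: binder_dec_node_ilocked()
only reports, via its new bool return, that the node became freeable;
the kfree() itself happens in the callers after the spinlock has been
dropped. A minimal sketch of that idiom (userspace pthreads; obj and
obj_put are illustrative names):

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct obj {
	pthread_spinlock_t lock;
	int refs;
};

/* Caller holds obj->lock; reports whether the object must be freed. */
static bool obj_put_locked(struct obj *o)
{
	return --o->refs == 0;
}

static void obj_put(struct obj *o)
{
	bool free_obj;

	pthread_spin_lock(&o->lock);
	free_obj = obj_put_locked(o);
	pthread_spin_unlock(&o->lock);

	/* Never free under the lock: the spinlock lives inside the
	 * object, so freeing first would mean unlocking freed memory.
	 */
	if (free_obj)
		free(o);
}

The same split appears in binder_dec_node_tmpref() above, and
binder_cleanup_ref() defers the free to binder_free_ref() the same
way.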


* [PATCH 28/37] binder: add spinlocks to protect todo lists
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (26 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 27/37] binder: use inner lock to sync work dq and node counts Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 29/37] binder: add spinlock to protect binder_node Todd Kjos
                   ` (9 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

The todo lists in the proc, thread, and node structures
are accessed by other procs/threads to place work
items on the queue.

The todo lists are protected by the new proc->inner_lock.
No locks should ever be nested under these locks. As the
name suggests, an outer lock will be introduced in
a later patch.
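
The layering the patch introduces can be sketched as follows
(userspace C; worklist, enqueue and friends are illustrative names,
not driver API): a bare *_locked() body assumes the caller already
holds the list lock, and a plain wrapper takes and drops it, so
callers can either do one-shot operations or batch several under a
single critical section.

#include <pthread.h>
#include <stddef.h>

struct work {
	struct work *next;
};

struct worklist {
	pthread_mutex_t lock;
	struct work *head;
	struct work **tail;	/* points at head, or at last->next */
};

static void enqueue_locked(struct worklist *wl, struct work *w)
{
	w->next = NULL;
	*wl->tail = w;
	wl->tail = &w->next;
}

static void enqueue(struct worklist *wl, struct work *w)
{
	pthread_mutex_lock(&wl->lock);
	enqueue_locked(wl, w);
	pthread_mutex_unlock(&wl->lock);
}

/* Caller holds wl->lock; returns NULL if the list is empty. */
static struct work *dequeue_head_locked(struct worklist *wl)
{
	struct work *w = wl->head;

	if (w) {
		wl->head = w->next;
		if (!wl->head)
			wl->tail = &wl->head;
	}
	return w;
}

The BC_FREE_BUFFER hunk below uses exactly this batching: it dequeues
the next async work item and re-enqueues it on the thread's todo list
inside one binder_inner_proc_lock() span.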

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 355 +++++++++++++++++++++++++++++++++++------------
 1 file changed, 269 insertions(+), 86 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 6c741416fa00..5a0389767843 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -279,8 +279,16 @@ struct binder_device {
 	struct binder_context context;
 };
 
+/**
+ * struct binder_work - work enqueued on a worklist
+ * @entry:             node enqueued on list
+ * @type:              type of work to be performed
+ *
+ * There are separate work lists for proc, thread, and node (async).
+ */
 struct binder_work {
 	struct list_head entry;
+
 	enum {
 		BINDER_WORK_TRANSACTION = 1,
 		BINDER_WORK_TRANSACTION_COMPLETE,
@@ -303,6 +311,7 @@ struct binder_error {
  *                        (invariant after initialized)
  * @lock:                 lock for node fields
  * @work:                 worklist element for node work
+ *                        (protected by @proc->inner_lock)
  * @rb_node:              element for proc->nodes tree
  * @dead_node:            element for binder_dead_nodes list
  *                        (protected by binder_dead_nodes_lock)
@@ -347,6 +356,7 @@ struct binder_error {
  * @min_priority:         minimum scheduling priority
  *                        (invariant after initialized)
  * @async_todo:           list of async work items
+ *                        (protected by @proc->inner_lock)
  *
  * Bookkeeping structure for binder nodes.
  */
@@ -388,6 +398,11 @@ struct binder_node {
 };
 
 struct binder_ref_death {
+	/**
+	 * @work: worklist element for death notifications
+	 *        (protected by inner_lock of the proc that
+	 *        this ref belongs to)
+	 */
 	struct binder_work work;
 	binder_uintptr_t cookie;
 };
@@ -467,11 +482,13 @@ enum binder_deferred_state {
  * @is_dead:              process is dead and awaiting free
  *                        when outstanding transactions are cleaned up
  * @todo:                 list of work for this process
+ *                        (protected by @inner_lock)
  * @wait:                 wait queue head to wait for proc work
  *                        (invariant after initialized)
  * @stats:                per-process binder statistics
  *                        (atomics, no lock needed)
  * @delivered_death:      list of delivered death notification
+ *                        (protected by @inner_lock)
  * @max_threads:          cap on number of binder threads
  * @requested_threads:    number of binder threads requested but not
  *                        yet started. In current implementation, can
@@ -542,6 +559,7 @@ enum {
  *                        (no lock needed)
  * @transaction_stack:    stack of in-progress transactions for this thread
  * @todo:                 list of work to do for this thread
+ *                        (protected by @proc->inner_lock)
  * @return_error:         transaction errors reported by this thread
  *                        (only accessed by this thread)
  * @reply_error:          transaction errors reported by target thread
@@ -689,6 +707,111 @@ _binder_node_unlock(struct binder_node *node, int line)
 	spin_unlock(&node->lock);
 }
 
+static bool binder_worklist_empty_ilocked(struct list_head *list)
+{
+	return list_empty(list);
+}
+
+/**
+ * binder_worklist_empty() - Check if no items on the work list
+ * @proc:       binder_proc associated with list
+ * @list:	list to check
+ *
+ * Return: true if there are no items on list, else false
+ */
+static bool binder_worklist_empty(struct binder_proc *proc,
+				  struct list_head *list)
+{
+	bool ret;
+
+	binder_inner_proc_lock(proc);
+	ret = binder_worklist_empty_ilocked(list);
+	binder_inner_proc_unlock(proc);
+	return ret;
+}
+
+static void
+binder_enqueue_work_ilocked(struct binder_work *work,
+			   struct list_head *target_list)
+{
+	BUG_ON(target_list == NULL);
+	BUG_ON(work->entry.next && !list_empty(&work->entry));
+	list_add_tail(&work->entry, target_list);
+}
+
+/**
+ * binder_enqueue_work() - Add an item to the work list
+ * @proc:         binder_proc associated with list
+ * @work:         struct binder_work to add to list
+ * @target_list:  list to add work to
+ *
+ * Adds the work to the specified list. Asserts that work
+ * is not already on a list.
+ */
+static void
+binder_enqueue_work(struct binder_proc *proc,
+		    struct binder_work *work,
+		    struct list_head *target_list)
+{
+	binder_inner_proc_lock(proc);
+	binder_enqueue_work_ilocked(work, target_list);
+	binder_inner_proc_unlock(proc);
+}
+
+static void
+binder_dequeue_work_ilocked(struct binder_work *work)
+{
+	list_del_init(&work->entry);
+}
+
+/**
+ * binder_dequeue_work() - Removes an item from the work list
+ * @proc:         binder_proc associated with list
+ * @work:         struct binder_work to remove from list
+ *
+ * Removes the specified work item from whatever list it is on.
+ * Can safely be called if work is not on any list.
+ */
+static void
+binder_dequeue_work(struct binder_proc *proc, struct binder_work *work)
+{
+	binder_inner_proc_lock(proc);
+	binder_dequeue_work_ilocked(work);
+	binder_inner_proc_unlock(proc);
+}
+
+static struct binder_work *binder_dequeue_work_head_ilocked(
+					struct list_head *list)
+{
+	struct binder_work *w;
+
+	w = list_first_entry_or_null(list, struct binder_work, entry);
+	if (w)
+		list_del_init(&w->entry);
+	return w;
+}
+
+/**
+ * binder_dequeue_work_head() - Dequeues the item at head of list
+ * @proc:         binder_proc associated with list
+ * @list:         list to dequeue head
+ *
+ * Removes the head of the list if there are items on the list
+ *
+ * Return: pointer to dequeued binder_work, NULL if list was empty
+ */
+static struct binder_work *binder_dequeue_work_head(
+					struct binder_proc *proc,
+					struct list_head *list)
+{
+	struct binder_work *w;
+
+	binder_inner_proc_lock(proc);
+	w = binder_dequeue_work_head_ilocked(list);
+	binder_inner_proc_unlock(proc);
+	return w;
+}
+
 static void
 binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
 static void binder_free_thread(struct binder_thread *thread);
@@ -870,8 +993,8 @@ static int binder_inc_node_ilocked(struct binder_node *node, int strong,
 		} else
 			node->local_strong_refs++;
 		if (!node->has_strong_ref && target_list) {
-			list_del_init(&node->work.entry);
-			list_add_tail(&node->work.entry, target_list);
+			binder_dequeue_work_ilocked(&node->work);
+			binder_enqueue_work_ilocked(&node->work, target_list);
 		}
 	} else {
 		if (!internal)
@@ -882,7 +1005,7 @@ static int binder_inc_node_ilocked(struct binder_node *node, int strong,
 					node->debug_id);
 				return -EINVAL;
 			}
-			list_add_tail(&node->work.entry, target_list);
+			binder_enqueue_work_ilocked(&node->work, target_list);
 		}
 	}
 	return 0;
@@ -926,19 +1049,20 @@ static bool binder_dec_node_ilocked(struct binder_node *node,
 
 	if (proc && (node->has_strong_ref || node->has_weak_ref)) {
 		if (list_empty(&node->work.entry)) {
-			list_add_tail(&node->work.entry, &node->proc->todo);
+			binder_enqueue_work_ilocked(&node->work, &proc->todo);
 			wake_up_interruptible(&node->proc->wait);
 		}
 	} else {
 		if (hlist_empty(&node->refs) && !node->local_strong_refs &&
 		    !node->local_weak_refs && !node->tmp_refs) {
-			list_del_init(&node->work.entry);
 			if (proc) {
-				rb_erase(&node->rb_node, &node->proc->nodes);
+				binder_dequeue_work_ilocked(&node->work);
+				rb_erase(&node->rb_node, &proc->nodes);
 				binder_debug(BINDER_DEBUG_INTERNAL_REFS,
 					     "refless node %d deleted\n",
 					     node->debug_id);
 			} else {
+				BUG_ON(!list_empty(&node->work.entry));
 				spin_lock(&binder_dead_nodes_lock);
 				/*
 				 * tmp_refs could have changed so
@@ -1188,7 +1312,7 @@ static void binder_cleanup_ref(struct binder_ref *ref)
 			     "%d delete ref %d desc %d has death notification\n",
 			      ref->proc->pid, ref->data.debug_id,
 			      ref->data.desc);
-		list_del(&ref->death->work.entry);
+		binder_dequeue_work(ref->proc, &ref->death->work);
 		binder_stats_deleted(BINDER_STAT_DEATH);
 	}
 	binder_stats_deleted(BINDER_STAT_REF);
@@ -1539,8 +1663,9 @@ static void binder_send_failed_reply(struct binder_transaction *t,
 			binder_pop_transaction(target_thread, t);
 			if (target_thread->reply_error.cmd == BR_OK) {
 				target_thread->reply_error.cmd = error_code;
-				list_add_tail(
-					&target_thread->reply_error.work.entry,
+				binder_enqueue_work(
+					target_thread->proc,
+					&target_thread->reply_error.work,
 					&target_thread->todo);
 				wake_up_interruptible(&target_thread->wait);
 			} else {
@@ -2578,7 +2703,7 @@ static void binder_transaction(struct binder_proc *proc,
 		}
 	}
 	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
-	list_add_tail(&tcomplete->entry, &thread->todo);
+	binder_enqueue_work(proc, tcomplete, &thread->todo);
 
 	if (reply) {
 		if (target_thread->is_dead)
@@ -2609,7 +2734,7 @@ static void binder_transaction(struct binder_proc *proc,
 			goto err_dead_proc_or_thread;
 	}
 	t->work.type = BINDER_WORK_TRANSACTION;
-	list_add_tail(&t->work.entry, target_list);
+	binder_enqueue_work(target_proc, &t->work, target_list);
 	if (target_wait) {
 		if (reply || !(tr->flags & TF_ONE_WAY))
 			wake_up_interruptible_sync(target_wait);
@@ -2685,13 +2810,15 @@ static void binder_transaction(struct binder_proc *proc,
 	BUG_ON(thread->return_error.cmd != BR_OK);
 	if (in_reply_to) {
 		thread->return_error.cmd = BR_TRANSACTION_COMPLETE;
-		list_add_tail(&thread->return_error.work.entry,
-			      &thread->todo);
+		binder_enqueue_work(thread->proc,
+				    &thread->return_error.work,
+				    &thread->todo);
 		binder_send_failed_reply(in_reply_to, return_error);
 	} else {
 		thread->return_error.cmd = return_error;
-		list_add_tail(&thread->return_error.work.entry,
-			      &thread->todo);
+		binder_enqueue_work(thread->proc,
+				    &thread->return_error.work,
+				    &thread->todo);
 	}
 }
 
@@ -2884,11 +3011,21 @@ static int binder_thread_write(struct binder_proc *proc,
 				buffer->transaction = NULL;
 			}
 			if (buffer->async_transaction && buffer->target_node) {
-				BUG_ON(!buffer->target_node->has_async_transaction);
-				if (list_empty(&buffer->target_node->async_todo))
-					buffer->target_node->has_async_transaction = 0;
+				struct binder_node *buf_node;
+				struct binder_work *w;
+
+				buf_node = buffer->target_node;
+				BUG_ON(!buf_node->has_async_transaction);
+				BUG_ON(buf_node->proc != proc);
+				binder_inner_proc_lock(proc);
+				w = binder_dequeue_work_head_ilocked(
+						&buf_node->async_todo);
+				if (!w)
+					buf_node->has_async_transaction = 0;
 				else
-					list_move_tail(buffer->target_node->async_todo.next, &thread->todo);
+					binder_enqueue_work_ilocked(
+							w, &thread->todo);
+				binder_inner_proc_unlock(proc);
 			}
 			trace_binder_transaction_buffer_release(buffer);
 			binder_transaction_buffer_release(proc, buffer, NULL);
@@ -3000,9 +3137,10 @@ static int binder_thread_write(struct binder_proc *proc,
 					WARN_ON(thread->return_error.cmd !=
 						BR_OK);
 					thread->return_error.cmd = BR_ERROR;
-					list_add_tail(
-					    &thread->return_error.work.entry,
-					    &thread->todo);
+					binder_enqueue_work(
+						thread->proc,
+						&thread->return_error.work,
+						&thread->todo);
 					binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
 						     "%d:%d BC_REQUEST_DEATH_NOTIFICATION failed\n",
 						     proc->pid, thread->pid);
@@ -3014,11 +3152,20 @@ static int binder_thread_write(struct binder_proc *proc,
 				ref->death = death;
 				if (ref->node->proc == NULL) {
 					ref->death->work.type = BINDER_WORK_DEAD_BINDER;
-					if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
-						list_add_tail(&ref->death->work.entry, &thread->todo);
-					} else {
-						list_add_tail(&ref->death->work.entry, &proc->todo);
-						wake_up_interruptible(&proc->wait);
+					if (thread->looper &
+					    (BINDER_LOOPER_STATE_REGISTERED |
+					     BINDER_LOOPER_STATE_ENTERED))
+						binder_enqueue_work(
+							proc,
+							&ref->death->work,
+							&thread->todo);
+					else {
+						binder_enqueue_work(
+							proc,
+							&ref->death->work,
+							&proc->todo);
+						wake_up_interruptible(
+								&proc->wait);
 					}
 				}
 			} else {
@@ -3036,18 +3183,27 @@ static int binder_thread_write(struct binder_proc *proc,
 					break;
 				}
 				ref->death = NULL;
+				binder_inner_proc_lock(proc);
 				if (list_empty(&death->work.entry)) {
 					death->work.type = BINDER_WORK_CLEAR_DEATH_NOTIFICATION;
-					if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
-						list_add_tail(&death->work.entry, &thread->todo);
-					} else {
-						list_add_tail(&death->work.entry, &proc->todo);
-						wake_up_interruptible(&proc->wait);
+					if (thread->looper &
+					    (BINDER_LOOPER_STATE_REGISTERED |
+					     BINDER_LOOPER_STATE_ENTERED))
+						binder_enqueue_work_ilocked(
+								&death->work,
+								&thread->todo);
+					else {
+						binder_enqueue_work_ilocked(
+								&death->work,
+								&proc->todo);
+						wake_up_interruptible(
+								&proc->wait);
 					}
 				} else {
 					BUG_ON(death->work.type != BINDER_WORK_DEAD_BINDER);
 					death->work.type = BINDER_WORK_DEAD_BINDER_AND_CLEAR;
 				}
+				binder_inner_proc_unlock(proc);
 			}
 		} break;
 		case BC_DEAD_BINDER_DONE: {
@@ -3059,8 +3215,13 @@ static int binder_thread_write(struct binder_proc *proc,
 				return -EFAULT;
 
 			ptr += sizeof(cookie);
-			list_for_each_entry(w, &proc->delivered_death, entry) {
-				struct binder_ref_death *tmp_death = container_of(w, struct binder_ref_death, work);
+			binder_inner_proc_lock(proc);
+			list_for_each_entry(w, &proc->delivered_death,
+					    entry) {
+				struct binder_ref_death *tmp_death =
+					container_of(w,
+						     struct binder_ref_death,
+						     work);
 
 				if (tmp_death->cookie == cookie) {
 					death = tmp_death;
@@ -3074,19 +3235,25 @@ static int binder_thread_write(struct binder_proc *proc,
 			if (death == NULL) {
 				binder_user_error("%d:%d BC_DEAD_BINDER_DONE %016llx not found\n",
 					proc->pid, thread->pid, (u64)cookie);
+				binder_inner_proc_unlock(proc);
 				break;
 			}
-
-			list_del_init(&death->work.entry);
+			binder_dequeue_work_ilocked(&death->work);
 			if (death->work.type == BINDER_WORK_DEAD_BINDER_AND_CLEAR) {
 				death->work.type = BINDER_WORK_CLEAR_DEATH_NOTIFICATION;
-				if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
-					list_add_tail(&death->work.entry, &thread->todo);
-				} else {
-					list_add_tail(&death->work.entry, &proc->todo);
+				if (thread->looper &
+					(BINDER_LOOPER_STATE_REGISTERED |
+					 BINDER_LOOPER_STATE_ENTERED))
+					binder_enqueue_work_ilocked(
+						&death->work, &thread->todo);
+				else {
+					binder_enqueue_work_ilocked(
+							&death->work,
+							&proc->todo);
 					wake_up_interruptible(&proc->wait);
 				}
 			}
+			binder_inner_proc_unlock(proc);
 		} break;
 
 		default:
@@ -3113,12 +3280,14 @@ static void binder_stat_br(struct binder_proc *proc,
 static int binder_has_proc_work(struct binder_proc *proc,
 				struct binder_thread *thread)
 {
-	return !list_empty(&proc->todo) || thread->looper_need_return;
+	return !binder_worklist_empty(proc, &proc->todo) ||
+		thread->looper_need_return;
 }
 
 static int binder_has_thread_work(struct binder_thread *thread)
 {
-	return !list_empty(&thread->todo) || thread->looper_need_return;
+	return !binder_worklist_empty(thread->proc, &thread->todo) ||
+		thread->looper_need_return;
 }
 
 static int binder_put_node_cmd(struct binder_proc *proc,
@@ -3172,7 +3341,7 @@ static int binder_thread_read(struct binder_proc *proc,
 
 retry:
 	wait_for_proc_work = thread->transaction_stack == NULL &&
-				list_empty(&thread->todo);
+		binder_worklist_empty(proc, &thread->todo);
 
 	thread->looper |= BINDER_LOOPER_STATE_WAITING;
 	if (wait_for_proc_work)
@@ -3182,7 +3351,7 @@ static int binder_thread_read(struct binder_proc *proc,
 
 	trace_binder_wait_for_work(wait_for_proc_work,
 				   !!thread->transaction_stack,
-				   !list_empty(&thread->todo));
+				   !binder_worklist_empty(proc, &thread->todo));
 	if (wait_for_proc_work) {
 		if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
 					BINDER_LOOPER_STATE_ENTERED))) {
@@ -3217,18 +3386,20 @@ static int binder_thread_read(struct binder_proc *proc,
 	while (1) {
 		uint32_t cmd;
 		struct binder_transaction_data tr;
-		struct binder_work *w;
+		struct binder_work *w = NULL;
+		struct list_head *list = NULL;
 		struct binder_transaction *t = NULL;
 		struct binder_thread *t_from;
 
 		binder_inner_proc_lock(proc);
-		if (!list_empty(&thread->todo)) {
-			w = list_first_entry(&thread->todo, struct binder_work,
-					     entry);
-		} else if (!list_empty(&proc->todo) && wait_for_proc_work) {
-			w = list_first_entry(&proc->todo, struct binder_work,
-					     entry);
-		} else {
+		if (!binder_worklist_empty_ilocked(&thread->todo))
+			list = &thread->todo;
+		else if (!binder_worklist_empty_ilocked(&proc->todo) &&
+			   wait_for_proc_work)
+			list = &proc->todo;
+		else {
+			binder_inner_proc_unlock(proc);
+
 			/* no data added */
 			if (ptr - buffer == 4 && !thread->looper_need_return)
 				goto retry;
@@ -3239,7 +3410,7 @@ static int binder_thread_read(struct binder_proc *proc,
 			binder_inner_proc_unlock(proc);
 			break;
 		}
-		list_del_init(&w->entry);
+		w = binder_dequeue_work_head_ilocked(list);
 
 		switch (w->type) {
 		case BINDER_WORK_TRANSACTION: {
@@ -3388,8 +3559,8 @@ static int binder_thread_read(struct binder_proc *proc,
 				binder_stats_deleted(BINDER_STAT_DEATH);
 			} else {
 				binder_inner_proc_lock(proc);
-				list_add_tail(&w->entry,
-					      &proc->delivered_death);
+				binder_enqueue_work_ilocked(
+						w, &proc->delivered_death);
 				binder_inner_proc_unlock(proc);
 			}
 			if (cmd == BR_DEAD_BINDER)
@@ -3499,13 +3670,16 @@ static int binder_thread_read(struct binder_proc *proc,
 	return 0;
 }
 
-static void binder_release_work(struct list_head *list)
+static void binder_release_work(struct binder_proc *proc,
+				struct list_head *list)
 {
 	struct binder_work *w;
 
-	while (!list_empty(list)) {
-		w = list_first_entry(list, struct binder_work, entry);
-		list_del_init(&w->entry);
+	while (1) {
+		w = binder_dequeue_work_head(proc, list);
+		if (!w)
+			return;
+
 		switch (w->type) {
 		case BINDER_WORK_TRANSACTION: {
 			struct binder_transaction *t;
@@ -3669,7 +3843,7 @@ static int binder_thread_release(struct binder_proc *proc,
 
 	if (send_reply)
 		binder_send_failed_reply(send_reply, BR_DEAD_REPLY);
-	binder_release_work(&thread->todo);
+	binder_release_work(proc, &thread->todo);
 	binder_thread_dec_tmpref(thread);
 	return active_transactions;
 }
@@ -3686,7 +3860,7 @@ static unsigned int binder_poll(struct file *filp,
 	thread = binder_get_thread(proc);
 
 	wait_for_proc_work = thread->transaction_stack == NULL &&
-		list_empty(&thread->todo);
+		binder_worklist_empty(proc, &thread->todo);
 
 	binder_unlock(__func__);
 
@@ -3749,7 +3923,7 @@ static int binder_ioctl_write_read(struct file *filp,
 					 &bwr.read_consumed,
 					 filp->f_flags & O_NONBLOCK);
 		trace_binder_read_done(ret);
-		if (!list_empty(&proc->todo))
+		if (!binder_worklist_empty(proc, &proc->todo))
 			wake_up_interruptible(&proc->wait);
 		if (ret < 0) {
 			if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
@@ -4069,10 +4243,10 @@ static int binder_node_release(struct binder_node *node, int refs)
 	int death = 0;
 	struct binder_proc *proc = node->proc;
 
-	binder_release_work(&node->async_todo);
+	binder_release_work(proc, &node->async_todo);
 
 	binder_inner_proc_lock(proc);
-	list_del_init(&node->work.entry);
+	binder_dequeue_work_ilocked(&node->work);
 	/*
 	 * The caller must have taken a temporary ref on the node,
 	 */
@@ -4101,13 +4275,15 @@ static int binder_node_release(struct binder_node *node, int refs)
 
 		death++;
 
+		binder_inner_proc_lock(ref->proc);
 		if (list_empty(&ref->death->work.entry)) {
 			ref->death->work.type = BINDER_WORK_DEAD_BINDER;
-			list_add_tail(&ref->death->work.entry,
-				      &ref->proc->todo);
+			binder_enqueue_work_ilocked(&ref->death->work,
+						    &ref->proc->todo);
 			wake_up_interruptible(&ref->proc->wait);
 		} else
 			BUG();
+		binder_inner_proc_unlock(ref->proc);
 	}
 
 	binder_debug(BINDER_DEBUG_DEAD_BINDER,
@@ -4183,8 +4359,8 @@ static void binder_deferred_release(struct binder_proc *proc)
 		binder_free_ref(ref);
 	}
 
-	binder_release_work(&proc->todo);
-	binder_release_work(&proc->delivered_death);
+	binder_release_work(proc, &proc->todo);
+	binder_release_work(proc, &proc->delivered_death);
 
 	binder_debug(BINDER_DEBUG_OPEN_CLOSE,
 		     "%s: %d threads %d, nodes %d (ref %d), refs %d, active transactions %d\n",
@@ -4275,9 +4451,9 @@ static void print_binder_transaction(struct seq_file *m, const char *prefix,
 		   t->buffer->data);
 }
 
-static void print_binder_work(struct seq_file *m, const char *prefix,
-			      const char *transaction_prefix,
-			      struct binder_work *w)
+static void print_binder_work_ilocked(struct seq_file *m, const char *prefix,
+				      const char *transaction_prefix,
+				      struct binder_work *w)
 {
 	struct binder_node *node;
 	struct binder_transaction *t;
@@ -4318,15 +4494,16 @@ static void print_binder_work(struct seq_file *m, const char *prefix,
 	}
 }
 
-static void print_binder_thread(struct seq_file *m,
-				struct binder_thread *thread,
-				int print_always)
+static void print_binder_thread_ilocked(struct seq_file *m,
+					struct binder_thread *thread,
+					int print_always)
 {
 	struct binder_transaction *t;
 	struct binder_work *w;
 	size_t start_pos = m->count;
 	size_t header_pos;
 
+	WARN_ON(!spin_is_locked(&thread->proc->inner_lock));
 	seq_printf(m, "  thread %d: l %02x need_return %d tr %d\n",
 			thread->pid, thread->looper,
 			thread->looper_need_return,
@@ -4348,7 +4525,8 @@ static void print_binder_thread(struct seq_file *m,
 		}
 	}
 	list_for_each_entry(w, &thread->todo, entry) {
-		print_binder_work(m, "    ", "    pending transaction", w);
+		print_binder_work_ilocked(m, "    ",
+					  "    pending transaction", w);
 	}
 	if (!print_always && m->count == header_pos)
 		m->count = start_pos;
@@ -4375,9 +4553,13 @@ static void print_binder_node(struct seq_file *m, struct binder_node *node)
 			seq_printf(m, " %d", ref->proc->pid);
 	}
 	seq_puts(m, "\n");
-	list_for_each_entry(w, &node->async_todo, entry)
-		print_binder_work(m, "    ",
-				  "    pending async transaction", w);
+	if (node->proc) {
+		binder_inner_proc_lock(node->proc);
+		list_for_each_entry(w, &node->async_todo, entry)
+			print_binder_work_ilocked(m, "    ",
+					  "    pending async transaction", w);
+		binder_inner_proc_unlock(node->proc);
+	}
 }
 
 static void print_binder_ref(struct seq_file *m, struct binder_ref *ref)
@@ -4401,9 +4583,11 @@ static void print_binder_proc(struct seq_file *m,
 	seq_printf(m, "context %s\n", proc->context->name);
 	header_pos = m->count;
 
+	binder_inner_proc_lock(proc);
 	for (n = rb_first(&proc->threads); n != NULL; n = rb_next(n))
-		print_binder_thread(m, rb_entry(n, struct binder_thread,
+		print_binder_thread_ilocked(m, rb_entry(n, struct binder_thread,
 						rb_node), print_all);
+	binder_inner_proc_unlock(proc);
 	for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n)) {
 		struct binder_node *node = rb_entry(n, struct binder_node,
 						    rb_node);
@@ -4418,12 +4602,14 @@ static void print_binder_proc(struct seq_file *m,
 						     rb_node_desc));
 	}
 	binder_alloc_print_allocated(m, &proc->alloc);
+	binder_inner_proc_lock(proc);
 	list_for_each_entry(w, &proc->todo, entry)
-		print_binder_work(m, "  ", "  pending transaction", w);
+		print_binder_work_ilocked(m, "  ", "  pending transaction", w);
 	list_for_each_entry(w, &proc->delivered_death, entry) {
 		seq_puts(m, "  has delivered dead binder\n");
 		break;
 	}
+	binder_inner_proc_unlock(proc);
 	if (!print_all && m->count == header_pos)
 		m->count = start_pos;
 }
@@ -4562,15 +4748,12 @@ static void print_binder_proc_stats(struct seq_file *m,
 	seq_printf(m, "  buffers: %d\n", count);
 
 	count = 0;
+	binder_inner_proc_lock(proc);
 	list_for_each_entry(w, &proc->todo, entry) {
-		switch (w->type) {
-		case BINDER_WORK_TRANSACTION:
+		if (w->type == BINDER_WORK_TRANSACTION)
 			count++;
-			break;
-		default:
-			break;
-		}
 	}
+	binder_inner_proc_unlock(proc);
 	seq_printf(m, "  pending transactions: %d\n", count);
 
 	print_binder_stats(m, "  ", &proc->stats);
-- 
2.13.2.725.g09c95d1e9-goog


* [PATCH 29/37] binder: add spinlock to protect binder_node
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (27 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 28/37] binder: add spinlocks to protect todo lists Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 30/37] binder: protect proc->nodes with inner lock Todd Kjos
                   ` (8 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

node->lock is used to protect elements of node. No need
to acquire it for fields that are invariant: debug_id,
ptr, cookie.
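
Since several node fields are guarded by node->lock alone, or by
node->lock plus the owning proc's inner lock, the patch adds a
combined binder_node_inner_lock() helper. Its nesting can be sketched
in userspace C (illustrative names, not driver API; assumes, as the
driver arranges, that node->proc is only cleared while node->lock is
held, so the check below is stable):

#include <pthread.h>

struct fake_proc { pthread_mutex_t inner_lock; };

struct fake_node {
	pthread_mutex_t lock;		/* level 2 */
	struct fake_proc *proc;		/* NULL once the proc has died */
};

static void node_inner_lock(struct fake_node *node)
{
	/* Node lock (level 2) first, then the proc's inner lock
	 * (level 3) if the node is still attached to a live proc.
	 */
	pthread_mutex_lock(&node->lock);
	if (node->proc)
		pthread_mutex_lock(&node->proc->inner_lock);
}

static void node_inner_unlock(struct fake_node *node)
{
	struct fake_proc *proc = node->proc;

	if (proc)
		pthread_mutex_unlock(&proc->inner_lock);
	pthread_mutex_unlock(&node->lock);
}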

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 220 +++++++++++++++++++++++++++++++++++------------
 1 file changed, 165 insertions(+), 55 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 5a0389767843..5654187555be 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -318,6 +318,7 @@ struct binder_error {
  * @proc:                 binder_proc that owns this node
  *                        (invariant after initialized)
  * @refs:                 list of references on this node
+ *                        (protected by @lock)
  * @internal_strong_refs: used to take strong references when
  *                        initiating a transaction
  *                        (protected by @proc->inner_lock if @proc
@@ -351,6 +352,7 @@ struct binder_error {
  *                        (protected by @proc->inner_lock if @proc
  *                        and by @lock)
  * @has_async_transaction: async transaction to node in progress
+ *                        (protected by @lock)
  * @accept_fds:           file descriptor operations supported for node
  *                        (invariant after initialized)
  * @min_priority:         minimum scheduling priority
@@ -432,6 +434,7 @@ struct binder_ref_data {
  * @rb_node_desc: node for lookup by @data.desc in proc's rb_tree
  * @rb_node_node: node for lookup by @node in proc's rb_tree
  * @node_entry:  list entry for node->refs list in target node
+ *               (protected by @node->lock)
  * @proc:        binder_proc containing ref
  * @node:        binder_node of target node. When cleaning up a
  *               ref for deletion in binder_cleanup_ref, a non-NULL
@@ -707,6 +710,43 @@ _binder_node_unlock(struct binder_node *node, int line)
 	spin_unlock(&node->lock);
 }
 
+/**
+ * binder_node_inner_lock() - Acquire node and inner locks
+ * @node:         struct binder_node to acquire
+ *
+ * Acquires node->lock. If node->proc is non-NULL, also acquires
+ * proc->inner_lock. Used to protect binder_node fields
+ */
+#define binder_node_inner_lock(node) _binder_node_inner_lock(node, __LINE__)
+static void
+_binder_node_inner_lock(struct binder_node *node, int line)
+{
+	binder_debug(BINDER_DEBUG_SPINLOCKS,
+		     "%s: line=%d\n", __func__, line);
+	spin_lock(&node->lock);
+	if (node->proc)
+		binder_inner_proc_lock(node->proc);
+}
+
+/**
+ * binder_node_inner_unlock() - Release node and inner locks
+ * @node:         struct binder_node to unlock
+ *
+ * Release locks acquired via binder_node_inner_lock()
+ */
+#define binder_node_inner_unlock(node) _binder_node_inner_unlock(node, __LINE__)
+static void
+_binder_node_inner_unlock(struct binder_node *node, int line)
+{
+	struct binder_proc *proc = node->proc;
+
+	binder_debug(BINDER_DEBUG_SPINLOCKS,
+		     "%s: line=%d\n", __func__, line);
+	if (proc)
+		binder_inner_proc_unlock(proc);
+	spin_unlock(&node->lock);
+}
+
 static bool binder_worklist_empty_ilocked(struct list_head *list)
 {
 	return list_empty(list);
@@ -925,12 +965,14 @@ static struct binder_node *binder_get_node(struct binder_proc *proc,
 }
 
 static struct binder_node *binder_new_node(struct binder_proc *proc,
-					   binder_uintptr_t ptr,
-					   binder_uintptr_t cookie)
+					   struct flat_binder_object *fp)
 {
 	struct rb_node **p = &proc->nodes.rb_node;
 	struct rb_node *parent = NULL;
 	struct binder_node *node;
+	binder_uintptr_t ptr = fp ? fp->binder : 0;
+	binder_uintptr_t cookie = fp ? fp->cookie : 0;
+	__u32 flags = fp ? fp->flags : 0;
 
 	while (*p) {
 		parent = *p;
@@ -956,6 +998,8 @@ static struct binder_node *binder_new_node(struct binder_proc *proc,
 	node->ptr = ptr;
 	node->cookie = cookie;
 	node->work.type = BINDER_WORK_NODE;
+	node->min_priority = flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
+	node->accept_fds = !!(flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
 	spin_lock_init(&node->lock);
 	INIT_LIST_HEAD(&node->work.entry);
 	INIT_LIST_HEAD(&node->async_todo);
@@ -972,12 +1016,15 @@ static void binder_free_node(struct binder_node *node)
 	binder_stats_deleted(BINDER_STAT_NODE);
 }
 
-static int binder_inc_node_ilocked(struct binder_node *node, int strong,
-				   int internal,
-				   struct list_head *target_list)
+static int binder_inc_node_nilocked(struct binder_node *node, int strong,
+				    int internal,
+				    struct list_head *target_list)
 {
-	if (node->proc)
-		BUG_ON(!spin_is_locked(&node->proc->inner_lock));
+	struct binder_proc *proc = node->proc;
+
+	BUG_ON(!spin_is_locked(&node->lock));
+	if (proc)
+		BUG_ON(!spin_is_locked(&proc->inner_lock));
 	if (strong) {
 		if (internal) {
 			if (target_list == NULL &&
@@ -1016,20 +1063,19 @@ static int binder_inc_node(struct binder_node *node, int strong, int internal,
 {
 	int ret;
 
-	if (node->proc)
-		binder_inner_proc_lock(node->proc);
-	ret = binder_inc_node_ilocked(node, strong, internal, target_list);
-	if (node->proc)
-		binder_inner_proc_unlock(node->proc);
+	binder_node_inner_lock(node);
+	ret = binder_inc_node_nilocked(node, strong, internal, target_list);
+	binder_node_inner_unlock(node);
 
 	return ret;
 }
 
-static bool binder_dec_node_ilocked(struct binder_node *node,
-				    int strong, int internal)
+static bool binder_dec_node_nilocked(struct binder_node *node,
+				     int strong, int internal)
 {
 	struct binder_proc *proc = node->proc;
 
+	BUG_ON(!spin_is_locked(&node->lock));
 	if (proc)
 		BUG_ON(!spin_is_locked(&proc->inner_lock));
 	if (strong) {
@@ -1088,12 +1134,9 @@ static void binder_dec_node(struct binder_node *node, int strong, int internal)
 {
 	bool free_node;
 
-	if (node->proc)
-		binder_inner_proc_lock(node->proc);
-	free_node = binder_dec_node_ilocked(node, strong, internal);
-	if (node->proc)
-		binder_inner_proc_unlock(node->proc);
-
+	binder_node_inner_lock(node);
+	free_node = binder_dec_node_nilocked(node, strong, internal);
+	binder_node_inner_unlock(node);
 	if (free_node)
 		binder_free_node(node);
 }
@@ -1123,6 +1166,7 @@ static void binder_inc_node_tmpref_ilocked(struct binder_node *node)
  */
 static void binder_inc_node_tmpref(struct binder_node *node)
 {
+	binder_node_lock(node);
 	if (node->proc)
 		binder_inner_proc_lock(node->proc);
 	else
@@ -1132,6 +1176,7 @@ static void binder_inc_node_tmpref(struct binder_node *node)
 		binder_inner_proc_unlock(node->proc);
 	else
 		spin_unlock(&binder_dead_nodes_lock);
+	binder_node_unlock(node);
 }
 
 /**
@@ -1144,9 +1189,8 @@ static void binder_dec_node_tmpref(struct binder_node *node)
 {
 	bool free_node;
 
-	if (node->proc)
-		binder_inner_proc_lock(node->proc);
-	else
+	binder_node_inner_lock(node);
+	if (!node->proc)
 		spin_lock(&binder_dead_nodes_lock);
 	node->tmp_refs--;
 	BUG_ON(node->tmp_refs < 0);
@@ -1158,9 +1202,8 @@ static void binder_dec_node_tmpref(struct binder_node *node)
 	 * causes no actual reference to be released in binder_dec_node().
 	 * If that changes, a change is needed here too.
 	 */
-	free_node = binder_dec_node_ilocked(node, 0, 1);
-	if (node->proc)
-		binder_inner_proc_unlock(node->proc);
+	free_node = binder_dec_node_nilocked(node, 0, 1);
+	binder_node_inner_unlock(node);
 	if (free_node)
 		binder_free_node(node);
 }
@@ -1264,19 +1307,21 @@ static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
 	}
 	rb_link_node(&new_ref->rb_node_desc, parent, p);
 	rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc);
+
+	binder_node_lock(node);
 	hlist_add_head(&new_ref->node_entry, &node->refs);
 
 	binder_debug(BINDER_DEBUG_INTERNAL_REFS,
 		     "%d new ref %d desc %d for node %d\n",
 		      proc->pid, new_ref->data.debug_id, new_ref->data.desc,
 		      node->debug_id);
+	binder_node_unlock(node);
 	return new_ref;
 }
 
 static void binder_cleanup_ref(struct binder_ref *ref)
 {
 	bool delete_node = false;
-	struct binder_proc *node_proc = ref->node->proc;
 
 	binder_debug(BINDER_DEBUG_INTERNAL_REFS,
 		     "%d delete ref %d desc %d for node %d\n",
@@ -1286,15 +1331,13 @@ static void binder_cleanup_ref(struct binder_ref *ref)
 	rb_erase(&ref->rb_node_desc, &ref->proc->refs_by_desc);
 	rb_erase(&ref->rb_node_node, &ref->proc->refs_by_node);
 
-	if (node_proc)
-		binder_inner_proc_lock(node_proc);
+	binder_node_inner_lock(ref->node);
 	if (ref->data.strong)
-		binder_dec_node_ilocked(ref->node, 1, 1);
+		binder_dec_node_nilocked(ref->node, 1, 1);
 
 	hlist_del(&ref->node_entry);
-	delete_node = binder_dec_node_ilocked(ref->node, 0, 1);
-	if (node_proc)
-		binder_inner_proc_unlock(node_proc);
+	delete_node = binder_dec_node_nilocked(ref->node, 0, 1);
+	binder_node_inner_unlock(ref->node);
 	/*
 	 * Clear ref->node unless we want the caller to free the node
 	 */
@@ -1989,12 +2032,9 @@ static int binder_translate_binder(struct flat_binder_object *fp,
 
 	node = binder_get_node(proc, fp->binder);
 	if (!node) {
-		node = binder_new_node(proc, fp->binder, fp->cookie);
+		node = binder_new_node(proc, fp);
 		if (!node)
 			return -ENOMEM;
-
-		node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
-		node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
 	}
 	if (fp->cookie != node->cookie) {
 		binder_user_error("%d:%d sending u%016llx node %d, cookie mismatch %016llx != %016llx\n",
@@ -2055,6 +2095,7 @@ static int binder_translate_handle(struct flat_binder_object *fp,
 		goto done;
 	}
 
+	binder_node_lock(node);
 	if (node->proc == target_proc) {
 		if (fp->hdr.type == BINDER_TYPE_HANDLE)
 			fp->hdr.type = BINDER_TYPE_BINDER;
@@ -2062,18 +2103,24 @@ static int binder_translate_handle(struct flat_binder_object *fp,
 			fp->hdr.type = BINDER_TYPE_WEAK_BINDER;
 		fp->binder = node->ptr;
 		fp->cookie = node->cookie;
-		binder_inc_node(node,
-				fp->hdr.type == BINDER_TYPE_BINDER,
-				0, NULL);
+		if (node->proc)
+			binder_inner_proc_lock(node->proc);
+		binder_inc_node_nilocked(node,
+					 fp->hdr.type == BINDER_TYPE_BINDER,
+					 0, NULL);
+		if (node->proc)
+			binder_inner_proc_unlock(node->proc);
 		trace_binder_transaction_ref_to_node(t, node, &src_rdata);
 		binder_debug(BINDER_DEBUG_TRANSACTION,
 			     "        ref %d desc %d -> node %d u%016llx\n",
 			     src_rdata.debug_id, src_rdata.desc, node->debug_id,
 			     (u64)node->ptr);
+		binder_node_unlock(node);
 	} else {
 		int ret;
 		struct binder_ref_data dest_rdata;
 
+		binder_node_unlock(node);
 		ret = binder_inc_ref_for_node(target_proc, node,
 				fp->hdr.type == BINDER_TYPE_HANDLE,
 				NULL, &dest_rdata);
@@ -2381,13 +2428,16 @@ static void binder_transaction(struct binder_proc *proc,
 			mutex_unlock(&context->context_mgr_node_lock);
 		}
 		e->to_node = target_node->debug_id;
+		binder_node_lock(target_node);
 		target_proc = target_node->proc;
 		if (target_proc == NULL) {
+			binder_node_unlock(target_node);
 			return_error = BR_DEAD_REPLY;
 			return_error_line = __LINE__;
 			goto err_dead_binder;
 		}
 		target_proc->tmp_ref++;
+		binder_node_unlock(target_node);
 		if (security_binder_transaction(proc->tsk,
 						target_proc->tsk) < 0) {
 			return_error = BR_FAILED_REPLY;
@@ -2704,6 +2754,7 @@ static void binder_transaction(struct binder_proc *proc,
 	}
 	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
 	binder_enqueue_work(proc, tcomplete, &thread->todo);
+	t->work.type = BINDER_WORK_TRANSACTION;
 
 	if (reply) {
 		if (target_thread->is_dead)
@@ -2711,6 +2762,7 @@ static void binder_transaction(struct binder_proc *proc,
 		BUG_ON(t->buffer->async_transaction != 0);
 		binder_pop_transaction(target_thread, in_reply_to);
 		binder_free_transaction(in_reply_to);
+		binder_enqueue_work(target_proc, &t->work, target_list);
 	} else if (!(t->flags & TF_ONE_WAY)) {
 		BUG_ON(t->buffer->async_transaction != 0);
 		t->need_reply = 1;
@@ -2721,20 +2773,29 @@ static void binder_transaction(struct binder_proc *proc,
 			binder_pop_transaction(thread, t);
 			goto err_dead_proc_or_thread;
 		}
+		binder_enqueue_work(target_proc, &t->work, target_list);
 	} else {
 		BUG_ON(target_node == NULL);
 		BUG_ON(t->buffer->async_transaction != 1);
+		binder_node_lock(target_node);
 		if (target_node->has_async_transaction) {
 			target_list = &target_node->async_todo;
 			target_wait = NULL;
 		} else
 			target_node->has_async_transaction = 1;
+		/*
+		 * Test/set of has_async_transaction
+		 * must be atomic with enqueue on
+		 * async_todo
+		 */
 		if (target_proc->is_dead ||
-				(target_thread && target_thread->is_dead))
+				(target_thread && target_thread->is_dead)) {
+			binder_node_unlock(target_node);
 			goto err_dead_proc_or_thread;
+		}
+		binder_enqueue_work(target_proc, &t->work, target_list);
+		binder_node_unlock(target_node);
 	}
-	t->work.type = BINDER_WORK_TRANSACTION;
-	binder_enqueue_work(target_proc, &t->work, target_list);
 	if (target_wait) {
 		if (reply || !(tr->flags & TF_ONE_WAY))
 			wake_up_interruptible_sync(target_wait);
@@ -2913,6 +2974,7 @@ static int binder_thread_write(struct binder_proc *proc,
 			binder_uintptr_t node_ptr;
 			binder_uintptr_t cookie;
 			struct binder_node *node;
+			bool free_node;
 
 			if (get_user(node_ptr, (binder_uintptr_t __user *)ptr))
 				return -EFAULT;
@@ -2940,13 +3002,13 @@ static int binder_thread_write(struct binder_proc *proc,
 				binder_put_node(node);
 				break;
 			}
-			binder_inner_proc_lock(proc);
+			binder_node_inner_lock(node);
 			if (cmd == BC_ACQUIRE_DONE) {
 				if (node->pending_strong_ref == 0) {
 					binder_user_error("%d:%d BC_ACQUIRE_DONE node %d has no pending acquire request\n",
 						proc->pid, thread->pid,
 						node->debug_id);
-					binder_inner_proc_unlock(proc);
+					binder_node_inner_unlock(node);
 					binder_put_node(node);
 					break;
 				}
@@ -2956,20 +3018,22 @@ static int binder_thread_write(struct binder_proc *proc,
 					binder_user_error("%d:%d BC_INCREFS_DONE node %d has no pending increfs request\n",
 						proc->pid, thread->pid,
 						node->debug_id);
-					binder_inner_proc_unlock(proc);
+					binder_node_inner_unlock(node);
 					binder_put_node(node);
 					break;
 				}
 				node->pending_weak_ref = 0;
 			}
-			binder_inner_proc_unlock(proc);
-			binder_dec_node(node, cmd == BC_ACQUIRE_DONE, 0);
+			free_node = binder_dec_node_nilocked(node,
+					cmd == BC_ACQUIRE_DONE, 0);
+			WARN_ON(free_node);
 			binder_debug(BINDER_DEBUG_USER_REFS,
 				     "%d:%d %s node %d ls %d lw %d tr %d\n",
 				     proc->pid, thread->pid,
 				     cmd == BC_INCREFS_DONE ? "BC_INCREFS_DONE" : "BC_ACQUIRE_DONE",
 				     node->debug_id, node->local_strong_refs,
 				     node->local_weak_refs, node->tmp_refs);
+			binder_node_inner_unlock(node);
 			binder_put_node(node);
 			break;
 		}
@@ -3015,9 +3079,9 @@ static int binder_thread_write(struct binder_proc *proc,
 				struct binder_work *w;
 
 				buf_node = buffer->target_node;
+				binder_node_inner_lock(buf_node);
 				BUG_ON(!buf_node->has_async_transaction);
 				BUG_ON(buf_node->proc != proc);
-				binder_inner_proc_lock(proc);
 				w = binder_dequeue_work_head_ilocked(
 						&buf_node->async_todo);
 				if (!w)
@@ -3025,7 +3089,7 @@ static int binder_thread_write(struct binder_proc *proc,
 				else
 					binder_enqueue_work_ilocked(
 							w, &thread->todo);
-				binder_inner_proc_unlock(proc);
+				binder_node_inner_unlock(buf_node);
 			}
 			trace_binder_transaction_buffer_release(buffer);
 			binder_transaction_buffer_release(proc, buffer, NULL);
@@ -3150,6 +3214,7 @@ static int binder_thread_write(struct binder_proc *proc,
 				INIT_LIST_HEAD(&death->work.entry);
 				death->cookie = cookie;
 				ref->death = death;
+				binder_node_lock(ref->node);
 				if (ref->node->proc == NULL) {
 					ref->death->work.type = BINDER_WORK_DEAD_BINDER;
 					if (thread->looper &
@@ -3168,10 +3233,13 @@ static int binder_thread_write(struct binder_proc *proc,
 								&proc->wait);
 					}
 				}
+				binder_node_unlock(ref->node);
 			} else {
+				binder_node_lock(ref->node);
 				if (ref->death == NULL) {
 					binder_user_error("%d:%d BC_CLEAR_DEATH_NOTIFICATION death notification not active\n",
 						proc->pid, thread->pid);
+					binder_node_unlock(ref->node);
 					break;
 				}
 				death = ref->death;
@@ -3180,6 +3248,7 @@ static int binder_thread_write(struct binder_proc *proc,
 						proc->pid, thread->pid,
 						(u64)death->cookie,
 						(u64)cookie);
+					binder_node_unlock(ref->node);
 					break;
 				}
 				ref->death = NULL;
@@ -3204,6 +3273,7 @@ static int binder_thread_write(struct binder_proc *proc,
 					death->work.type = BINDER_WORK_DEAD_BINDER_AND_CLEAR;
 				}
 				binder_inner_proc_unlock(proc);
+				binder_node_unlock(ref->node);
 			}
 		} break;
 		case BC_DEAD_BINDER_DONE: {
@@ -3486,6 +3556,17 @@ static int binder_thread_read(struct binder_proc *proc,
 					     (u64)node_cookie);
 				rb_erase(&node->rb_node, &proc->nodes);
 				binder_inner_proc_unlock(proc);
+				binder_node_lock(node);
+				/*
+				 * Acquire the node lock before freeing the
+				 * node to serialize with other threads that
+				 * may have been holding the node lock while
+				 * decrementing this node (avoids race where
+				 * this thread frees while the other thread
+				 * is unlocking the node after the final
+				 * decrement)
+				 */
+				binder_node_unlock(node);
 				binder_free_node(node);
 			} else
 				binder_inner_proc_unlock(proc);
@@ -3973,16 +4054,18 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp)
 	} else {
 		context->binder_context_mgr_uid = curr_euid;
 	}
-	new_node = binder_new_node(proc, 0, 0);
+	new_node = binder_new_node(proc, NULL);
 	if (!new_node) {
 		ret = -ENOMEM;
 		goto out;
 	}
+	binder_node_lock(new_node);
 	new_node->local_weak_refs++;
 	new_node->local_strong_refs++;
 	new_node->has_strong_ref = 1;
 	new_node->has_weak_ref = 1;
 	context->binder_context_mgr_node = new_node;
+	binder_node_unlock(new_node);
 	binder_put_node(new_node);
 out:
 	mutex_unlock(&context->context_mgr_node_lock);
@@ -4245,6 +4328,7 @@ static int binder_node_release(struct binder_node *node, int refs)
 
 	binder_release_work(proc, &node->async_todo);
 
+	binder_node_lock(node);
 	binder_inner_proc_lock(proc);
 	binder_dequeue_work_ilocked(&node->work);
 	/*
@@ -4253,6 +4337,7 @@ static int binder_node_release(struct binder_node *node, int refs)
 	BUG_ON(!node->tmp_refs);
 	if (hlist_empty(&node->refs) && node->tmp_refs == 1) {
 		binder_inner_proc_unlock(proc);
+		binder_node_unlock(node);
 		binder_free_node(node);
 
 		return refs;
@@ -4289,6 +4374,7 @@ static int binder_node_release(struct binder_node *node, int refs)
 	binder_debug(BINDER_DEBUG_DEAD_BINDER,
 		     "node %d now dead, refs %d, death %d\n",
 		     node->debug_id, refs, death);
+	binder_node_unlock(node);
 	binder_put_node(node);
 
 	return refs;
@@ -4532,12 +4618,15 @@ static void print_binder_thread_ilocked(struct seq_file *m,
 		m->count = start_pos;
 }
 
-static void print_binder_node(struct seq_file *m, struct binder_node *node)
+static void print_binder_node_nlocked(struct seq_file *m,
+				      struct binder_node *node)
 {
 	struct binder_ref *ref;
 	struct binder_work *w;
 	int count;
 
+	WARN_ON(!spin_is_locked(&node->lock));
+
 	count = 0;
 	hlist_for_each_entry(ref, &node->refs, node_entry)
 		count++;
@@ -4564,11 +4653,13 @@ static void print_binder_node(struct seq_file *m, struct binder_node *node)
 
 static void print_binder_ref(struct seq_file *m, struct binder_ref *ref)
 {
+	binder_node_lock(ref->node);
 	seq_printf(m, "  ref %d: desc %d %snode %d s %d w %d d %pK\n",
 		   ref->data.debug_id, ref->data.desc,
 		   ref->node->proc ? "" : "dead ",
 		   ref->node->debug_id, ref->data.strong,
 		   ref->data.weak, ref->death);
+	binder_node_unlock(ref->node);
 }
 
 static void print_binder_proc(struct seq_file *m,
@@ -4591,8 +4682,10 @@ static void print_binder_proc(struct seq_file *m,
 	for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n)) {
 		struct binder_node *node = rb_entry(n, struct binder_node,
 						    rb_node);
+		binder_node_lock(node);
 		if (print_all || node->has_async_transaction)
-			print_binder_node(m, node);
+			print_binder_node_nlocked(m, node);
+		binder_node_unlock(node);
 	}
 	if (print_all) {
 		for (n = rb_first(&proc->refs_by_desc);
@@ -4764,6 +4857,7 @@ static int binder_state_show(struct seq_file *m, void *unused)
 {
 	struct binder_proc *proc;
 	struct binder_node *node;
+	struct binder_node *last_node = NULL;
 
 	binder_lock(__func__);
 
@@ -4772,9 +4866,25 @@ static int binder_state_show(struct seq_file *m, void *unused)
 	spin_lock(&binder_dead_nodes_lock);
 	if (!hlist_empty(&binder_dead_nodes))
 		seq_puts(m, "dead nodes:\n");
-	hlist_for_each_entry(node, &binder_dead_nodes, dead_node)
-		print_binder_node(m, node);
+	hlist_for_each_entry(node, &binder_dead_nodes, dead_node) {
+		/*
+		 * take a temporary reference on the node so it
+		 * survives and isn't removed from the list
+		 * while we print it.
+		 */
+		node->tmp_refs++;
+		spin_unlock(&binder_dead_nodes_lock);
+		if (last_node)
+			binder_put_node(last_node);
+		binder_node_lock(node);
+		print_binder_node_nlocked(m, node);
+		binder_node_unlock(node);
+		last_node = node;
+		spin_lock(&binder_dead_nodes_lock);
+	}
 	spin_unlock(&binder_dead_nodes_lock);
+	if (last_node)
+		binder_put_node(last_node);
 
 	mutex_lock(&binder_procs_lock);
 	hlist_for_each_entry(proc, &binder_procs, proc_node)
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 30/37] binder: protect proc->nodes with inner lock
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (28 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 29/37] binder: add spinlock to protect binder_node Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 31/37] binder: protect proc->threads with inner_lock Todd Kjos
                   ` (7 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

When locks for binder_ref handling are added, proc->nodes
will need to be modified while holding the outer lock, so
protect it with the inner lock, which can be acquired while
the outer lock is held.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
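For reviewers: binder_new_node() now does its kzalloc(GFP_KERNEL),
which may sleep, before taking proc->inner_lock (a spinlock), and
binder_init_node_ilocked() either links the preallocated node or
returns an existing one. Below is a minimal sketch of this
allocate-outside/insert-under-the-lock pattern; the names (struct
item, struct tree, __item_insert_locked) are illustrative only, not
binder symbols:

	struct item *item_get_or_create(struct tree *t, u64 key)
	{
		/* kzalloc(GFP_KERNEL) may sleep, so allocate before
		 * taking the spinlock
		 */
		struct item *new = kzalloc(sizeof(*new), GFP_KERNEL);
		struct item *item;

		if (!new)
			return NULL;
		spin_lock(&t->lock);
		/* insert new, or return the already-inserted item */
		item = __item_insert_locked(t, key, new);
		spin_unlock(&t->lock);
		if (item != new)
			kfree(new);	/* lost the race; discard ours */
		return item;
	}
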
 drivers/android/binder.c | 112 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 89 insertions(+), 23 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 5654187555be..4d08b5141b01 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -313,6 +313,7 @@ struct binder_error {
  * @work:                 worklist element for node work
  *                        (protected by @proc->inner_lock)
  * @rb_node:              element for proc->nodes tree
+ *                        (protected by @proc->inner_lock)
  * @dead_node:            element for binder_dead_nodes list
  *                        (protected by binder_dead_nodes_lock)
  * @proc:                 binder_proc that owns this node
@@ -470,6 +471,7 @@ enum binder_deferred_state {
  * @threads:              rbtree of binder_threads in this proc
  * @nodes:                rbtree of binder nodes associated with
  *                        this proc ordered by node->ptr
+ *                        (protected by @inner_lock)
  * @refs_by_desc:         rbtree of refs ordered by ref->desc
  * @refs_by_node:         rbtree of refs ordered by ref->node
  * @pid                   PID of group_leader of process
@@ -856,7 +858,7 @@ static void
 binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
 static void binder_free_thread(struct binder_thread *thread);
 static void binder_free_proc(struct binder_proc *proc);
-static void binder_inc_node_tmpref(struct binder_node *node);
+static void binder_inc_node_tmpref_ilocked(struct binder_node *node);
 
 static int task_get_unused_fd_flags(struct binder_proc *proc, int flags)
 {
@@ -938,12 +940,14 @@ static void binder_set_nice(long nice)
 	binder_user_error("%d RLIMIT_NICE not set\n", current->pid);
 }
 
-static struct binder_node *binder_get_node(struct binder_proc *proc,
-					   binder_uintptr_t ptr)
+static struct binder_node *binder_get_node_ilocked(struct binder_proc *proc,
+						   binder_uintptr_t ptr)
 {
 	struct rb_node *n = proc->nodes.rb_node;
 	struct binder_node *node;
 
+	BUG_ON(!spin_is_locked(&proc->inner_lock));
+
 	while (n) {
 		node = rb_entry(n, struct binder_node, rb_node);
 
@@ -957,15 +961,28 @@ static struct binder_node *binder_get_node(struct binder_proc *proc,
 			 * to ensure node stays alive until
 			 * call to binder_put_node()
 			 */
-			binder_inc_node_tmpref(node);
+			binder_inc_node_tmpref_ilocked(node);
 			return node;
 		}
 	}
 	return NULL;
 }
 
-static struct binder_node *binder_new_node(struct binder_proc *proc,
-					   struct flat_binder_object *fp)
+static struct binder_node *binder_get_node(struct binder_proc *proc,
+					   binder_uintptr_t ptr)
+{
+	struct binder_node *node;
+
+	binder_inner_proc_lock(proc);
+	node = binder_get_node_ilocked(proc, ptr);
+	binder_inner_proc_unlock(proc);
+	return node;
+}
+
+static struct binder_node *binder_init_node_ilocked(
+						struct binder_proc *proc,
+						struct binder_node *new_node,
+						struct flat_binder_object *fp)
 {
 	struct rb_node **p = &proc->nodes.rb_node;
 	struct rb_node *parent = NULL;
@@ -974,7 +991,9 @@ static struct binder_node *binder_new_node(struct binder_proc *proc,
 	binder_uintptr_t cookie = fp ? fp->cookie : 0;
 	__u32 flags = fp ? fp->flags : 0;
 
+	BUG_ON(!spin_is_locked(&proc->inner_lock));
 	while (*p) {
+
 		parent = *p;
 		node = rb_entry(parent, struct binder_node, rb_node);
 
@@ -982,13 +1001,17 @@ static struct binder_node *binder_new_node(struct binder_proc *proc,
 			p = &(*p)->rb_left;
 		else if (ptr > node->ptr)
 			p = &(*p)->rb_right;
-		else
-			return NULL;
+		else {
+			/*
+			 * A matching node is already in
+			 * the rb tree. Abandon the init
+			 * and return it.
+			 */
+			binder_inc_node_tmpref_ilocked(node);
+			return node;
+		}
 	}
-
-	node = kzalloc(sizeof(*node), GFP_KERNEL);
-	if (node == NULL)
-		return NULL;
+	node = new_node;
 	binder_stats_created(BINDER_STAT_NODE);
 	node->tmp_refs++;
 	rb_link_node(&node->rb_node, parent, p);
@@ -1007,6 +1030,27 @@ static struct binder_node *binder_new_node(struct binder_proc *proc,
 		     "%d:%d node %d u%016llx c%016llx created\n",
 		     proc->pid, current->pid, node->debug_id,
 		     (u64)node->ptr, (u64)node->cookie);
+
+	return node;
+}
+
+static struct binder_node *binder_new_node(struct binder_proc *proc,
+					   struct flat_binder_object *fp)
+{
+	struct binder_node *node;
+	struct binder_node *new_node = kzalloc(sizeof(*node), GFP_KERNEL);
+
+	if (!new_node)
+		return NULL;
+	binder_inner_proc_lock(proc);
+	node = binder_init_node_ilocked(proc, new_node, fp);
+	binder_inner_proc_unlock(proc);
+	if (node != new_node)
+		/*
+		 * The node was already added by another thread
+		 */
+		kfree(new_node);
+
 	return node;
 }
 
@@ -4420,6 +4464,7 @@ static void binder_deferred_release(struct binder_proc *proc)
 
 	nodes = 0;
 	incoming_refs = 0;
+	binder_inner_proc_lock(proc);
 	while ((n = rb_first(&proc->nodes))) {
 		struct binder_node *node;
 
@@ -4430,10 +4475,13 @@ static void binder_deferred_release(struct binder_proc *proc)
 		 * calling binder_node_release() which will either
 		 * kfree() the node or call binder_put_node()
 		 */
-		binder_inc_node_tmpref(node);
+		binder_inc_node_tmpref_ilocked(node);
 		rb_erase(&node->rb_node, &proc->nodes);
+		binder_inner_proc_unlock(proc);
 		incoming_refs = binder_node_release(node, incoming_refs);
+		binder_inner_proc_lock(proc);
 	}
+	binder_inner_proc_unlock(proc);
 
 	outgoing_refs = 0;
 	while ((n = rb_first(&proc->refs_by_desc))) {
@@ -4618,14 +4666,16 @@ static void print_binder_thread_ilocked(struct seq_file *m,
 		m->count = start_pos;
 }
 
-static void print_binder_node_nlocked(struct seq_file *m,
-				      struct binder_node *node)
+static void print_binder_node_nilocked(struct seq_file *m,
+				       struct binder_node *node)
 {
 	struct binder_ref *ref;
 	struct binder_work *w;
 	int count;
 
 	WARN_ON(!spin_is_locked(&node->lock));
+	if (node->proc)
+		WARN_ON(!spin_is_locked(&node->proc->inner_lock));
 
 	count = 0;
 	hlist_for_each_entry(ref, &node->refs, node_entry)
@@ -4643,11 +4693,9 @@ static void print_binder_node_nlocked(struct seq_file *m,
 	}
 	seq_puts(m, "\n");
 	if (node->proc) {
-		binder_inner_proc_lock(node->proc);
 		list_for_each_entry(w, &node->async_todo, entry)
 			print_binder_work_ilocked(m, "    ",
 					  "    pending async transaction", w);
-		binder_inner_proc_unlock(node->proc);
 	}
 }
 
@@ -4669,6 +4717,7 @@ static void print_binder_proc(struct seq_file *m,
 	struct rb_node *n;
 	size_t start_pos = m->count;
 	size_t header_pos;
+	struct binder_node *last_node = NULL;
 
 	seq_printf(m, "proc %d\n", proc->pid);
 	seq_printf(m, "context %s\n", proc->context->name);
@@ -4678,15 +4727,30 @@ static void print_binder_proc(struct seq_file *m,
 	for (n = rb_first(&proc->threads); n != NULL; n = rb_next(n))
 		print_binder_thread_ilocked(m, rb_entry(n, struct binder_thread,
 						rb_node), print_all);
-	binder_inner_proc_unlock(proc);
+
 	for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n)) {
 		struct binder_node *node = rb_entry(n, struct binder_node,
 						    rb_node);
-		binder_node_lock(node);
-		if (print_all || node->has_async_transaction)
-			print_binder_node_nlocked(m, node);
-		binder_node_unlock(node);
+		/*
+		 * take a temporary reference on the node so it
+		 * survives and isn't removed from the tree
+		 * while we print it.
+		 */
+		binder_inc_node_tmpref_ilocked(node);
+		/* Need to drop inner lock to take node lock */
+		binder_inner_proc_unlock(proc);
+		if (last_node)
+			binder_put_node(last_node);
+		binder_node_inner_lock(node);
+		print_binder_node_nilocked(m, node);
+		binder_node_inner_unlock(node);
+		last_node = node;
+		binder_inner_proc_lock(proc);
 	}
+	binder_inner_proc_unlock(proc);
+	if (last_node)
+		binder_put_node(last_node);
+
 	if (print_all) {
 		for (n = rb_first(&proc->refs_by_desc);
 		     n != NULL;
@@ -4822,8 +4886,10 @@ static void print_binder_proc_stats(struct seq_file *m,
 			proc->ready_threads,
 			binder_alloc_get_free_async_space(&proc->alloc));
 	count = 0;
+	binder_inner_proc_lock(proc);
 	for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n))
 		count++;
+	binder_inner_proc_unlock(proc);
 	seq_printf(m, "  nodes: %d\n", count);
 	count = 0;
 	strong = 0;
@@ -4877,7 +4943,7 @@ static int binder_state_show(struct seq_file *m, void *unused)
 		if (last_node)
 			binder_put_node(last_node);
 		binder_node_lock(node);
-		print_binder_node_nlocked(m, node);
+		print_binder_node_nilocked(m, node);
 		binder_node_unlock(node);
 		last_node = node;
 		spin_lock(&binder_dead_nodes_lock);
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 31/37] binder: protect proc->threads with inner_lock
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (29 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 30/37] binder: protect proc->nodes with inner lock Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 32/37] binder: protect transaction_stack with inner lock Todd Kjos
                   ` (6 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

proc->threads will need to be accessed while higher-order
locks belonging to other processes are held, so use
proc->inner_lock to protect it. proc->tmp_ref now needs to
be protected by proc->inner_lock as well.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
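For reviewers: the tmp_ref changes follow a decrement-under-lock,
free-after-unlock shape, since the lock being held is embedded in the
object about to be freed. A condensed view of the new
binder_proc_dec_tmpref() body from this patch:

	binder_inner_proc_lock(proc);
	proc->tmp_ref--;
	if (proc->is_dead && RB_EMPTY_ROOT(&proc->threads) &&
			!proc->tmp_ref) {
		/* drop the lock first: it lives inside proc, which
		 * binder_free_proc() is about to free
		 */
		binder_inner_proc_unlock(proc);
		binder_free_proc(proc);
		return;
	}
	binder_inner_proc_unlock(proc);
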
 drivers/android/binder.c | 87 +++++++++++++++++++++++++++++++++++-------------
 1 file changed, 63 insertions(+), 24 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 4d08b5141b01..5deb9453dee4 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -469,6 +469,7 @@ enum binder_deferred_state {
  * struct binder_proc - binder process bookkeeping
  * @proc_node:            element for binder_procs list
  * @threads:              rbtree of binder_threads in this proc
+ *                        (protected by @inner_lock)
  * @nodes:                rbtree of binder nodes associated with
  *                        this proc ordered by node->ptr
  *                        (protected by @inner_lock)
@@ -486,6 +487,7 @@ enum binder_deferred_state {
  *                        (protected by binder_deferred_lock)
  * @is_dead:              process is dead and awaiting free
  *                        when outstanding transactions are cleaned up
+ *                        (protected by @inner_lock)
  * @todo:                 list of work for this process
  *                        (protected by @inner_lock)
  * @wait:                 wait queue head to wait for proc work
@@ -501,6 +503,7 @@ enum binder_deferred_state {
  * @requested_threads_started: number binder threads started
  * @ready_threads:        number of threads waiting for proc work
  * @tmp_ref:              temporary reference to indicate proc is in use
+ *                        (protected by @inner_lock)
  * @default_priority:     default scheduler priority
  *                        (invariant after initialized)
  * @debugfs_entry:        debugfs node
@@ -556,6 +559,7 @@ enum {
  * @proc:                 binder process for this thread
  *                        (invariant after initialization)
  * @rb_node:              element for proc->threads rbtree
+ *                        (protected by @proc->inner_lock)
  * @pid:                  PID for this thread
  *                        (invariant after initialization)
  * @looper:               bitmap of looping state
@@ -576,6 +580,7 @@ enum {
  *                        always be acquired)
  * @is_dead:              thread is dead and awaiting free
  *                        when outstanding transactions are cleaned up
+ *                        (protected by @proc->inner_lock)
  *
  * Bookkeeping structure for binder threads.
  */
@@ -1667,15 +1672,15 @@ static void binder_thread_dec_tmpref(struct binder_thread *thread)
 	/*
 	 * atomic is used to protect the counter value while
 	 * it cannot reach zero or thread->is_dead is false
-	 *
-	 * TODO: future patch adds locking to ensure that the
-	 * check of tmp_ref and is_dead is done with a lock held
 	 */
+	binder_inner_proc_lock(thread->proc);
 	atomic_dec(&thread->tmp_ref);
 	if (thread->is_dead && !atomic_read(&thread->tmp_ref)) {
+		binder_inner_proc_unlock(thread->proc);
 		binder_free_thread(thread);
 		return;
 	}
+	binder_inner_proc_unlock(thread->proc);
 }
 
 /**
@@ -1692,12 +1697,15 @@ static void binder_thread_dec_tmpref(struct binder_thread *thread)
  */
 static void binder_proc_dec_tmpref(struct binder_proc *proc)
 {
+	binder_inner_proc_lock(proc);
 	proc->tmp_ref--;
 	if (proc->is_dead && RB_EMPTY_ROOT(&proc->threads) &&
 			!proc->tmp_ref) {
+		binder_inner_proc_unlock(proc);
 		binder_free_proc(proc);
 		return;
 	}
+	binder_inner_proc_unlock(proc);
 }
 
 /**
@@ -2480,7 +2488,9 @@ static void binder_transaction(struct binder_proc *proc,
 			return_error_line = __LINE__;
 			goto err_dead_binder;
 		}
+		binder_inner_proc_lock(target_proc);
 		target_proc->tmp_ref++;
+		binder_inner_proc_unlock(target_proc);
 		binder_node_unlock(target_node);
 		if (security_binder_transaction(proc->tsk,
 						target_proc->tsk) < 0) {
@@ -3854,7 +3864,8 @@ static void binder_release_work(struct binder_proc *proc,
 
 }
 
-static struct binder_thread *binder_get_thread(struct binder_proc *proc)
+static struct binder_thread *binder_get_thread_ilocked(
+		struct binder_proc *proc, struct binder_thread *new_thread)
 {
 	struct binder_thread *thread = NULL;
 	struct rb_node *parent = NULL;
@@ -3869,25 +3880,45 @@ static struct binder_thread *binder_get_thread(struct binder_proc *proc)
 		else if (current->pid > thread->pid)
 			p = &(*p)->rb_right;
 		else
-			break;
+			return thread;
 	}
-	if (*p == NULL) {
-		thread = kzalloc(sizeof(*thread), GFP_KERNEL);
-		if (thread == NULL)
+	if (!new_thread)
+		return NULL;
+	thread = new_thread;
+	binder_stats_created(BINDER_STAT_THREAD);
+	thread->proc = proc;
+	thread->pid = current->pid;
+	atomic_set(&thread->tmp_ref, 0);
+	init_waitqueue_head(&thread->wait);
+	INIT_LIST_HEAD(&thread->todo);
+	rb_link_node(&thread->rb_node, parent, p);
+	rb_insert_color(&thread->rb_node, &proc->threads);
+	thread->looper_need_return = true;
+	thread->return_error.work.type = BINDER_WORK_RETURN_ERROR;
+	thread->return_error.cmd = BR_OK;
+	thread->reply_error.work.type = BINDER_WORK_RETURN_ERROR;
+	thread->reply_error.cmd = BR_OK;
+
+	return thread;
+}
+
+static struct binder_thread *binder_get_thread(struct binder_proc *proc)
+{
+	struct binder_thread *thread;
+	struct binder_thread *new_thread;
+
+	binder_inner_proc_lock(proc);
+	thread = binder_get_thread_ilocked(proc, NULL);
+	binder_inner_proc_unlock(proc);
+	if (!thread) {
+		new_thread = kzalloc(sizeof(*thread), GFP_KERNEL);
+		if (new_thread == NULL)
 			return NULL;
-		binder_stats_created(BINDER_STAT_THREAD);
-		thread->proc = proc;
-		thread->pid = current->pid;
-		atomic_set(&thread->tmp_ref, 0);
-		init_waitqueue_head(&thread->wait);
-		INIT_LIST_HEAD(&thread->todo);
-		rb_link_node(&thread->rb_node, parent, p);
-		rb_insert_color(&thread->rb_node, &proc->threads);
-		thread->looper_need_return = true;
-		thread->return_error.work.type = BINDER_WORK_RETURN_ERROR;
-		thread->return_error.cmd = BR_OK;
-		thread->reply_error.work.type = BINDER_WORK_RETURN_ERROR;
-		thread->reply_error.cmd = BR_OK;
+		binder_inner_proc_lock(proc);
+		thread = binder_get_thread_ilocked(proc, new_thread);
+		binder_inner_proc_unlock(proc);
+		if (thread != new_thread)
+			kfree(new_thread);
 	}
 	return thread;
 }
@@ -3918,6 +3949,7 @@ static int binder_thread_release(struct binder_proc *proc,
 	int active_transactions = 0;
 	struct binder_transaction *last_t = NULL;
 
+	binder_inner_proc_lock(thread->proc);
 	/*
 	 * take a ref on the proc so it survives
 	 * after we remove this thread from proc->threads.
@@ -3965,6 +3997,7 @@ static int binder_thread_release(struct binder_proc *proc,
 		if (t)
 			spin_lock(&t->lock);
 	}
+	binder_inner_proc_unlock(thread->proc);
 
 	if (send_reply)
 		binder_send_failed_reply(send_reply, BR_DEAD_REPLY);
@@ -4338,6 +4371,7 @@ static void binder_deferred_flush(struct binder_proc *proc)
 	struct rb_node *n;
 	int wake_count = 0;
 
+	binder_inner_proc_lock(proc);
 	for (n = rb_first(&proc->threads); n != NULL; n = rb_next(n)) {
 		struct binder_thread *thread = rb_entry(n, struct binder_thread, rb_node);
 
@@ -4347,6 +4381,7 @@ static void binder_deferred_flush(struct binder_proc *proc)
 			wake_count++;
 		}
 	}
+	binder_inner_proc_unlock(proc);
 	wake_up_interruptible_all(&proc->wait);
 
 	binder_debug(BINDER_DEBUG_OPEN_CLOSE,
@@ -4445,6 +4480,7 @@ static void binder_deferred_release(struct binder_proc *proc)
 		context->binder_context_mgr_node = NULL;
 	}
 	mutex_unlock(&context->context_mgr_node_lock);
+	binder_inner_proc_lock(proc);
 	/*
 	 * Make sure proc stays alive after we
 	 * remove all the threads
@@ -4458,13 +4494,14 @@ static void binder_deferred_release(struct binder_proc *proc)
 		struct binder_thread *thread;
 
 		thread = rb_entry(n, struct binder_thread, rb_node);
+		binder_inner_proc_unlock(proc);
 		threads++;
 		active_transactions += binder_thread_release(proc, thread);
+		binder_inner_proc_lock(proc);
 	}
 
 	nodes = 0;
 	incoming_refs = 0;
-	binder_inner_proc_lock(proc);
 	while ((n = rb_first(&proc->nodes))) {
 		struct binder_node *node;
 
@@ -4872,10 +4909,13 @@ static void print_binder_proc_stats(struct seq_file *m,
 	struct binder_work *w;
 	struct rb_node *n;
 	int count, strong, weak;
+	size_t free_async_space =
+		binder_alloc_get_free_async_space(&proc->alloc);
 
 	seq_printf(m, "proc %d\n", proc->pid);
 	seq_printf(m, "context %s\n", proc->context->name);
 	count = 0;
+	binder_inner_proc_lock(proc);
 	for (n = rb_first(&proc->threads); n != NULL; n = rb_next(n))
 		count++;
 	seq_printf(m, "  threads: %d\n", count);
@@ -4884,9 +4924,8 @@ static void print_binder_proc_stats(struct seq_file *m,
 			"  free async space %zd\n", proc->requested_threads,
 			proc->requested_threads_started, proc->max_threads,
 			proc->ready_threads,
-			binder_alloc_get_free_async_space(&proc->alloc));
+			free_async_space);
 	count = 0;
-	binder_inner_proc_lock(proc);
 	for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n))
 		count++;
 	binder_inner_proc_unlock(proc);
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 32/37] binder: protect transaction_stack with inner lock.
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (30 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 31/37] binder: protect proc->threads with inner_lock Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 33/37] binder: use inner lock to protect thread accounting Todd Kjos
                   ` (5 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

From: Martijn Coenen <maco@google.com>

This makes future changes to priority inheritance
easier, since we want to be able to look at a thread's
transaction stack when selecting a thread to inherit
priority for.

It also allows us to take just a single lock in a
few paths, where we used to take two in succession.

Signed-off-by: Martijn Coenen <maco@google.com>
---
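For reviewers: the new binder_get_txn_from_and_acq_inner() helper
pairs a temporary thread reference with the inner lock and re-checks
t->from under that lock, backing out if the transaction was torn down
in the meantime. A condensed view of the flow added below:

	from = binder_get_txn_from(t);	/* takes a tmp reference */
	if (!from)
		return NULL;
	binder_inner_proc_lock(from->proc);
	if (t->from) {
		/* still attached; caller unlocks and drops the ref */
		BUG_ON(from != t->from);
		return from;
	}
	/* raced with teardown: undo the lock and the reference */
	binder_inner_proc_unlock(from->proc);
	binder_thread_dec_tmpref(from);
	return NULL;
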
 drivers/android/binder.c | 96 +++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 79 insertions(+), 17 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 5deb9453dee4..9d18ca1f7dcc 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -30,7 +30,8 @@
  * 3) proc->inner_lock : protects the thread and node lists
  *    (proc->threads, proc->nodes) and all todo lists associated
  *    with the binder_proc (proc->todo, thread->todo,
- *    proc->delivered_death and node->async_todo).
+ *    proc->delivered_death and node->async_todo), as well as
+ *    thread->transaction_stack
  *    binder_inner_proc_lock() and binder_inner_proc_unlock()
  *    are used to acq/rel
  *
@@ -567,11 +568,13 @@ enum {
  * @looper_needs_return:  looping thread needs to exit driver
  *                        (no lock needed)
  * @transaction_stack:    stack of in-progress transactions for this thread
+ *                        (protected by @proc->inner_lock)
  * @todo:                 list of work to do for this thread
  *                        (protected by @proc->inner_lock)
  * @return_error:         transaction errors reported by this thread
  *                        (only accessed by this thread)
  * @reply_error:          transaction errors reported by target thread
+ *                        (protected by @proc->inner_lock)
  * @wait:                 wait queue for thread work
  * @stats:                per-thread statistics
  *                        (atomics, no lock needed)
@@ -1644,10 +1647,11 @@ static int binder_inc_ref_for_node(struct binder_proc *proc,
 	return ret;
 }
 
-static void binder_pop_transaction(struct binder_thread *target_thread,
-				   struct binder_transaction *t)
+static void binder_pop_transaction_ilocked(struct binder_thread *target_thread,
+					   struct binder_transaction *t)
 {
 	BUG_ON(!target_thread);
+	BUG_ON(!spin_is_locked(&target_thread->proc->inner_lock));
 	BUG_ON(target_thread->transaction_stack != t);
 	BUG_ON(target_thread->transaction_stack->from != target_thread);
 	target_thread->transaction_stack =
@@ -1731,6 +1735,35 @@ static struct binder_thread *binder_get_txn_from(
 	return from;
 }
 
+/**
+ * binder_get_txn_from_and_acq_inner() - get t->from and acquire inner lock
+ * @t:	binder transaction for t->from
+ *
+ * Same as binder_get_txn_from() except it also acquires the proc->inner_lock
+ * to guarantee that the thread cannot be released while operating on it.
+ * The caller must call binder_inner_proc_unlock() to release the inner lock
+ * as well as call binder_thread_dec_tmpref() to release the reference.
+ *
+ * Return: the value of t->from
+ */
+static struct binder_thread *binder_get_txn_from_and_acq_inner(
+		struct binder_transaction *t)
+{
+	struct binder_thread *from;
+
+	from = binder_get_txn_from(t);
+	if (!from)
+		return NULL;
+	binder_inner_proc_lock(from->proc);
+	if (t->from) {
+		BUG_ON(from != t->from);
+		return from;
+	}
+	binder_inner_proc_unlock(from->proc);
+	binder_thread_dec_tmpref(from);
+	return NULL;
+}
+
 static void binder_free_transaction(struct binder_transaction *t)
 {
 	if (t->buffer)
@@ -1747,7 +1780,7 @@ static void binder_send_failed_reply(struct binder_transaction *t,
 
 	BUG_ON(t->flags & TF_ONE_WAY);
 	while (1) {
-		target_thread = binder_get_txn_from(t);
+		target_thread = binder_get_txn_from_and_acq_inner(t);
 		if (target_thread) {
 			binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
 				     "send failed reply for transaction %d to %d:%d\n",
@@ -1755,11 +1788,10 @@ static void binder_send_failed_reply(struct binder_transaction *t,
 				      target_thread->proc->pid,
 				      target_thread->pid);
 
-			binder_pop_transaction(target_thread, t);
+			binder_pop_transaction_ilocked(target_thread, t);
 			if (target_thread->reply_error.cmd == BR_OK) {
 				target_thread->reply_error.cmd = error_code;
-				binder_enqueue_work(
-					target_thread->proc,
+				binder_enqueue_work_ilocked(
 					&target_thread->reply_error.work,
 					&target_thread->todo);
 				wake_up_interruptible(&target_thread->wait);
@@ -1767,6 +1799,7 @@ static void binder_send_failed_reply(struct binder_transaction *t,
 				WARN(1, "Unexpected reply error: %u\n",
 						target_thread->reply_error.cmd);
 			}
+			binder_inner_proc_unlock(target_thread->proc);
 			binder_thread_dec_tmpref(target_thread);
 			binder_free_transaction(t);
 			return;
@@ -2396,8 +2429,10 @@ static void binder_transaction(struct binder_proc *proc,
 	e->context_name = proc->context->name;
 
 	if (reply) {
+		binder_inner_proc_lock(proc);
 		in_reply_to = thread->transaction_stack;
 		if (in_reply_to == NULL) {
+			binder_inner_proc_unlock(proc);
 			binder_user_error("%d:%d got reply transaction with no transaction stack\n",
 					  proc->pid, thread->pid);
 			return_error = BR_FAILED_REPLY;
@@ -2405,7 +2440,6 @@ static void binder_transaction(struct binder_proc *proc,
 			return_error_line = __LINE__;
 			goto err_empty_call_stack;
 		}
-		binder_set_nice(in_reply_to->saved_priority);
 		if (in_reply_to->to_thread != thread) {
 			spin_lock(&in_reply_to->lock);
 			binder_user_error("%d:%d got reply transaction with bad transaction stack, transaction %d has target %d:%d\n",
@@ -2415,6 +2449,7 @@ static void binder_transaction(struct binder_proc *proc,
 				in_reply_to->to_thread ?
 				in_reply_to->to_thread->pid : 0);
 			spin_unlock(&in_reply_to->lock);
+			binder_inner_proc_unlock(proc);
 			return_error = BR_FAILED_REPLY;
 			return_error_param = -EPROTO;
 			return_error_line = __LINE__;
@@ -2422,7 +2457,9 @@ static void binder_transaction(struct binder_proc *proc,
 			goto err_bad_call_stack;
 		}
 		thread->transaction_stack = in_reply_to->to_parent;
-		target_thread = binder_get_txn_from(in_reply_to);
+		binder_inner_proc_unlock(proc);
+		binder_set_nice(in_reply_to->saved_priority);
+		target_thread = binder_get_txn_from_and_acq_inner(in_reply_to);
 		if (target_thread == NULL) {
 			return_error = BR_DEAD_REPLY;
 			return_error_line = __LINE__;
@@ -2434,6 +2471,7 @@ static void binder_transaction(struct binder_proc *proc,
 				target_thread->transaction_stack ?
 				target_thread->transaction_stack->debug_id : 0,
 				in_reply_to->debug_id);
+			binder_inner_proc_unlock(target_thread->proc);
 			return_error = BR_FAILED_REPLY;
 			return_error_param = -EPROTO;
 			return_error_line = __LINE__;
@@ -2443,6 +2481,7 @@ static void binder_transaction(struct binder_proc *proc,
 		}
 		target_proc = target_thread->proc;
 		target_proc->tmp_ref++;
+		binder_inner_proc_unlock(target_thread->proc);
 	} else {
 		if (tr->target.handle) {
 			struct binder_ref *ref;
@@ -2499,6 +2538,7 @@ static void binder_transaction(struct binder_proc *proc,
 			return_error_line = __LINE__;
 			goto err_invalid_target_handle;
 		}
+		binder_inner_proc_lock(proc);
 		if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
 			struct binder_transaction *tmp;
 
@@ -2511,6 +2551,7 @@ static void binder_transaction(struct binder_proc *proc,
 					tmp->to_thread ?
 					tmp->to_thread->pid : 0);
 				spin_unlock(&tmp->lock);
+				binder_inner_proc_unlock(proc);
 				return_error = BR_FAILED_REPLY;
 				return_error_param = -EPROTO;
 				return_error_line = __LINE__;
@@ -2531,6 +2572,7 @@ static void binder_transaction(struct binder_proc *proc,
 				tmp = tmp->from_parent;
 			}
 		}
+		binder_inner_proc_unlock(proc);
 	}
 	if (target_thread) {
 		e->to_thread = target_thread->pid;
@@ -2811,23 +2853,34 @@ static void binder_transaction(struct binder_proc *proc,
 	t->work.type = BINDER_WORK_TRANSACTION;
 
 	if (reply) {
-		if (target_thread->is_dead)
+		binder_inner_proc_lock(target_proc);
+		if (target_thread->is_dead) {
+			binder_inner_proc_unlock(target_proc);
 			goto err_dead_proc_or_thread;
+		}
 		BUG_ON(t->buffer->async_transaction != 0);
-		binder_pop_transaction(target_thread, in_reply_to);
+		binder_pop_transaction_ilocked(target_thread, in_reply_to);
+		binder_enqueue_work_ilocked(&t->work, target_list);
+		binder_inner_proc_unlock(target_proc);
 		binder_free_transaction(in_reply_to);
-		binder_enqueue_work(target_proc, &t->work, target_list);
 	} else if (!(t->flags & TF_ONE_WAY)) {
 		BUG_ON(t->buffer->async_transaction != 0);
+		binder_inner_proc_lock(proc);
 		t->need_reply = 1;
 		t->from_parent = thread->transaction_stack;
 		thread->transaction_stack = t;
+		binder_inner_proc_unlock(proc);
+		binder_inner_proc_lock(target_proc);
 		if (target_proc->is_dead ||
 				(target_thread && target_thread->is_dead)) {
-			binder_pop_transaction(thread, t);
+			binder_inner_proc_unlock(target_proc);
+			binder_inner_proc_lock(proc);
+			binder_pop_transaction_ilocked(thread, t);
+			binder_inner_proc_unlock(proc);
 			goto err_dead_proc_or_thread;
 		}
-		binder_enqueue_work(target_proc, &t->work, target_list);
+		binder_enqueue_work_ilocked(&t->work, target_list);
+		binder_inner_proc_unlock(target_proc);
 	} else {
 		BUG_ON(target_node == NULL);
 		BUG_ON(t->buffer->async_transaction != 1);
@@ -2842,12 +2895,15 @@ static void binder_transaction(struct binder_proc *proc,
 		 * must be atomic with enqueue on
 		 * async_todo
 		 */
+		binder_inner_proc_lock(target_proc);
 		if (target_proc->is_dead ||
 				(target_thread && target_thread->is_dead)) {
+			binder_inner_proc_unlock(target_proc);
 			binder_node_unlock(target_node);
 			goto err_dead_proc_or_thread;
 		}
-		binder_enqueue_work(target_proc, &t->work, target_list);
+		binder_enqueue_work_ilocked(&t->work, target_list);
+		binder_inner_proc_unlock(target_proc);
 		binder_node_unlock(target_node);
 	}
 	if (target_wait) {
@@ -3464,8 +3520,10 @@ static int binder_thread_read(struct binder_proc *proc,
 	}
 
 retry:
+	binder_inner_proc_lock(proc);
 	wait_for_proc_work = thread->transaction_stack == NULL &&
-		binder_worklist_empty(proc, &thread->todo);
+		binder_worklist_empty_ilocked(&thread->todo);
+	binder_inner_proc_unlock(proc);
 
 	thread->looper |= BINDER_LOOPER_STATE_WAITING;
 	if (wait_for_proc_work)
@@ -3777,9 +3835,11 @@ static int binder_thread_read(struct binder_proc *proc,
 			binder_thread_dec_tmpref(t_from);
 		t->buffer->allow_user_free = 1;
 		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
+			binder_inner_proc_lock(thread->proc);
 			t->to_parent = thread->transaction_stack;
 			t->to_thread = thread;
 			thread->transaction_stack = t;
+			binder_inner_proc_unlock(thread->proc);
 		} else {
 			binder_free_transaction(t);
 		}
@@ -4017,8 +4077,10 @@ static unsigned int binder_poll(struct file *filp,
 
 	thread = binder_get_thread(proc);
 
+	binder_inner_proc_lock(thread->proc);
 	wait_for_proc_work = thread->transaction_stack == NULL &&
-		binder_worklist_empty(proc, &thread->todo);
+		binder_worklist_empty_ilocked(&thread->todo);
+	binder_inner_proc_unlock(thread->proc);
 
 	binder_unlock(__func__);
 
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 33/37] binder: use inner lock to protect thread accounting
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (31 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 32/37] binder: protect transaction_stack with inner lock Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 34/37] binder: protect binder_ref with outer lock Todd Kjos
                   ` (4 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Use the inner lock to protect the thread accounting fields
in the proc structure: max_threads, requested_threads,
requested_threads_started, and ready_threads.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
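For reviewers: the BINDER_SET_MAX_THREADS rework is driven by the
rule that copy_from_user() may fault and sleep, so it must not run
under a spinlock; the value is copied to a local first and then
published under proc->inner_lock. Condensed (error path simplified)
from the hunk below:

	int max_threads;

	/* may fault and sleep: do the user copy before locking */
	if (copy_from_user(&max_threads, ubuf, sizeof(max_threads)))
		return -EINVAL;
	binder_inner_proc_lock(proc);
	proc->max_threads = max_threads;
	binder_inner_proc_unlock(proc);
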
 drivers/android/binder.c | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 9d18ca1f7dcc..704540ea3e12 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -498,11 +498,15 @@ enum binder_deferred_state {
  * @delivered_death:      list of delivered death notification
  *                        (protected by @inner_lock)
  * @max_threads:          cap on number of binder threads
+ *                        (protected by @inner_lock)
  * @requested_threads:    number of binder threads requested but not
  *                        yet started. In current implementation, can
  *                        only be 0 or 1.
+ *                        (protected by @inner_lock)
  * @requested_threads_started: number binder threads started
+ *                        (protected by @inner_lock)
  * @ready_threads:        number of threads waiting for proc work
+ *                        (protected by @inner_lock)
  * @tmp_ref:              temporary reference to indicate proc is in use
  *                        (protected by @inner_lock)
  * @default_priority:     default scheduler priority
@@ -3234,6 +3238,7 @@ static int binder_thread_write(struct binder_proc *proc,
 			binder_debug(BINDER_DEBUG_THREADS,
 				     "%d:%d BC_REGISTER_LOOPER\n",
 				     proc->pid, thread->pid);
+			binder_inner_proc_lock(proc);
 			if (thread->looper & BINDER_LOOPER_STATE_ENTERED) {
 				thread->looper |= BINDER_LOOPER_STATE_INVALID;
 				binder_user_error("%d:%d ERROR: BC_REGISTER_LOOPER called after BC_ENTER_LOOPER\n",
@@ -3247,6 +3252,7 @@ static int binder_thread_write(struct binder_proc *proc,
 				proc->requested_threads_started++;
 			}
 			thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
+			binder_inner_proc_unlock(proc);
 			break;
 		case BC_ENTER_LOOPER:
 			binder_debug(BINDER_DEBUG_THREADS,
@@ -3523,11 +3529,11 @@ static int binder_thread_read(struct binder_proc *proc,
 	binder_inner_proc_lock(proc);
 	wait_for_proc_work = thread->transaction_stack == NULL &&
 		binder_worklist_empty_ilocked(&thread->todo);
+	if (wait_for_proc_work)
+		proc->ready_threads++;
 	binder_inner_proc_unlock(proc);
 
 	thread->looper |= BINDER_LOOPER_STATE_WAITING;
-	if (wait_for_proc_work)
-		proc->ready_threads++;
 
 	binder_unlock(__func__);
 
@@ -3558,8 +3564,10 @@ static int binder_thread_read(struct binder_proc *proc,
 
 	binder_lock(__func__);
 
+	binder_inner_proc_lock(proc);
 	if (wait_for_proc_work)
 		proc->ready_threads--;
+	binder_inner_proc_unlock(proc);
 	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
 
 	if (ret)
@@ -3849,19 +3857,22 @@ static int binder_thread_read(struct binder_proc *proc,
 done:
 
 	*consumed = ptr - buffer;
+	binder_inner_proc_lock(proc);
 	if (proc->requested_threads + proc->ready_threads == 0 &&
 	    proc->requested_threads_started < proc->max_threads &&
 	    (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
 	     BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
 	     /*spawn a new thread if we leave this out */) {
 		proc->requested_threads++;
+		binder_inner_proc_unlock(proc);
 		binder_debug(BINDER_DEBUG_THREADS,
 			     "%d:%d BR_SPAWN_LOOPER\n",
 			     proc->pid, thread->pid);
 		if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
 			return -EFAULT;
 		binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
-	}
+	} else
+		binder_inner_proc_unlock(proc);
 	return 0;
 }
 
@@ -4241,12 +4252,19 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		if (ret)
 			goto err;
 		break;
-	case BINDER_SET_MAX_THREADS:
-		if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
+	case BINDER_SET_MAX_THREADS: {
+		int max_threads;
+
+		if (copy_from_user(&max_threads, ubuf,
+				   sizeof(max_threads))) {
 			ret = -EINVAL;
 			goto err;
 		}
+		binder_inner_proc_lock(proc);
+		proc->max_threads = max_threads;
+		binder_inner_proc_unlock(proc);
 		break;
+	}
 	case BINDER_SET_CONTEXT_MGR:
 		ret = binder_ioctl_set_ctx_mgr(filp);
 		if (ret)
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 34/37] binder: protect binder_ref with outer lock
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (32 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 33/37] binder: use inner lock to protect thread accounting Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 35/37] binder: protect against stale pointers in print_binder_transaction Todd Kjos
                   ` (3 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Use proc->outer_lock to protect the binder_ref structure.
The outer lock allows functions operating on the binder_ref
to do nested acquires of node and inner locks as necessary
to attach refs to nodes atomically.

Binder refs must never be accessed without holding the
outer lock.

Signed-off-by: Todd Kjos <tkjos@google.com>
---
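For reviewers: the "nested acquires of node and inner locks" under
the outer lock mentioned above correspond to the acquisition order
sketched here. Illustrative only, not a literal call site; real paths
take only the locks they need and release in reverse order:

	binder_proc_lock(proc);			/* 1) proc->outer_lock */
	binder_node_lock(node);			/* 2) node->lock */
	binder_inner_proc_lock(node->proc);	/* 3) inner lock */
	/* ... attach the ref to the node atomically ... */
	binder_inner_proc_unlock(node->proc);
	binder_node_unlock(node);
	binder_proc_unlock(proc);
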
 drivers/android/binder.c | 133 +++++++++++++++++++++++++++++------------------
 1 file changed, 83 insertions(+), 50 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 704540ea3e12..f07f0d488aa4 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -475,7 +475,9 @@ enum binder_deferred_state {
  *                        this proc ordered by node->ptr
  *                        (protected by @inner_lock)
  * @refs_by_desc:         rbtree of refs ordered by ref->desc
+ *                        (protected by @outer_lock)
  * @refs_by_node:         rbtree of refs ordered by ref->node
+ *                        (protected by @outer_lock)
  * @pid                   PID of group_leader of process
  *                        (invariant after initialized)
  * @tsk                   task_struct for group_leader of process
@@ -1269,8 +1271,8 @@ static void binder_put_node(struct binder_node *node)
 	binder_dec_node_tmpref(node);
 }
 
-static struct binder_ref *binder_get_ref(struct binder_proc *proc,
-					 u32 desc, bool need_strong_ref)
+static struct binder_ref *binder_get_ref_olocked(struct binder_proc *proc,
+						 u32 desc, bool need_strong_ref)
 {
 	struct rb_node *n = proc->refs_by_desc.rb_node;
 	struct binder_ref *ref;
@@ -1293,7 +1295,7 @@ static struct binder_ref *binder_get_ref(struct binder_proc *proc,
 }
 
 /**
- * binder_get_ref_for_node() - get the ref associated with given node
+ * binder_get_ref_for_node_olocked() - get the ref associated with given node
  * @proc:	binder_proc that owns the ref
  * @node:	binder_node of target
  * @new_ref:	newly allocated binder_ref to be initialized or %NULL
@@ -1310,9 +1312,10 @@ static struct binder_ref *binder_get_ref(struct binder_proc *proc,
  *		new_ref. new_ref must be kfree'd by the caller in
  *		this case.
  */
-static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
-						  struct binder_node *node,
-						  struct binder_ref *new_ref)
+static struct binder_ref *binder_get_ref_for_node_olocked(
+					struct binder_proc *proc,
+					struct binder_node *node,
+					struct binder_ref *new_ref)
 {
 	struct binder_context *context = proc->context;
 	struct rb_node **p = &proc->refs_by_node.rb_node;
@@ -1375,7 +1378,7 @@ static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
 	return new_ref;
 }
 
-static void binder_cleanup_ref(struct binder_ref *ref)
+static void binder_cleanup_ref_olocked(struct binder_ref *ref)
 {
 	bool delete_node = false;
 
@@ -1418,17 +1421,17 @@ static void binder_cleanup_ref(struct binder_ref *ref)
 }
 
 /**
- * binder_inc_ref() - increment the ref for given handle
+ * binder_inc_ref_olocked() - increment the ref for given handle
  * @ref:         ref to be incremented
  * @strong:      if true, strong increment, else weak
  * @target_list: list to queue node work on
  *
- * Increment the ref.
+ * Increment the ref. @ref->proc->outer_lock must be held on entry
  *
  * Return: 0, if successful, else errno
  */
-static int binder_inc_ref(struct binder_ref *ref, int strong,
-			  struct list_head *target_list)
+static int binder_inc_ref_olocked(struct binder_ref *ref, int strong,
+				  struct list_head *target_list)
 {
 	int ret;
 
@@ -1457,12 +1460,9 @@ static int binder_inc_ref(struct binder_ref *ref, int strong,
  *
  * Decrement the ref.
  *
- * TODO: kfree is avoided here since an upcoming patch
- * will put this under a lock.
- *
  * Return: true if ref is cleaned up and ready to be freed
  */
-static bool binder_dec_ref(struct binder_ref *ref, int strong)
+static bool binder_dec_ref_olocked(struct binder_ref *ref, int strong)
 {
 	if (strong) {
 		if (ref->data.strong == 0) {
@@ -1486,13 +1486,7 @@ static bool binder_dec_ref(struct binder_ref *ref, int strong)
 		ref->data.weak--;
 	}
 	if (ref->data.strong == 0 && ref->data.weak == 0) {
-		binder_cleanup_ref(ref);
-		/*
-		 * TODO: we could kfree(ref) here, but an upcoming
-		 * patch will call this with a lock held, so we
-		 * return an indication that the ref should be
-		 * freed.
-		 */
+		binder_cleanup_ref_olocked(ref);
 		return true;
 	}
 	return false;
@@ -1517,7 +1511,8 @@ static struct binder_node *binder_get_node_from_ref(
 	struct binder_node *node;
 	struct binder_ref *ref;
 
-	ref = binder_get_ref(proc, desc, need_strong_ref);
+	binder_proc_lock(proc);
+	ref = binder_get_ref_olocked(proc, desc, need_strong_ref);
 	if (!ref)
 		goto err_no_ref;
 	node = ref->node;
@@ -1528,10 +1523,12 @@ static struct binder_node *binder_get_node_from_ref(
 	binder_inc_node_tmpref(node);
 	if (rdata)
 		*rdata = ref->data;
+	binder_proc_unlock(proc);
 
 	return node;
 
 err_no_ref:
+	binder_proc_unlock(proc);
 	return NULL;
 }
 
@@ -1571,24 +1568,27 @@ static int binder_update_ref_for_handle(struct binder_proc *proc,
 	struct binder_ref *ref;
 	bool delete_ref = false;
 
-	ref = binder_get_ref(proc, desc, strong);
+	binder_proc_lock(proc);
+	ref = binder_get_ref_olocked(proc, desc, strong);
 	if (!ref) {
 		ret = -EINVAL;
 		goto err_no_ref;
 	}
 	if (increment)
-		ret = binder_inc_ref(ref, strong, NULL);
+		ret = binder_inc_ref_olocked(ref, strong, NULL);
 	else
-		delete_ref = binder_dec_ref(ref, strong);
+		delete_ref = binder_dec_ref_olocked(ref, strong);
 
 	if (rdata)
 		*rdata = ref->data;
+	binder_proc_unlock(proc);
 
 	if (delete_ref)
 		binder_free_ref(ref);
 	return ret;
 
 err_no_ref:
+	binder_proc_unlock(proc);
 	return ret;
 }
 
@@ -1633,15 +1633,19 @@ static int binder_inc_ref_for_node(struct binder_proc *proc,
 	struct binder_ref *new_ref = NULL;
 	int ret = 0;
 
-	ref = binder_get_ref_for_node(proc, node, NULL);
+	binder_proc_lock(proc);
+	ref = binder_get_ref_for_node_olocked(proc, node, NULL);
 	if (!ref) {
+		binder_proc_unlock(proc);
 		new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
 		if (!new_ref)
 			return -ENOMEM;
-		ref = binder_get_ref_for_node(proc, node, new_ref);
+		binder_proc_lock(proc);
+		ref = binder_get_ref_for_node_olocked(proc, node, new_ref);
 	}
-	ret = binder_inc_ref(ref, strong, target_list);
+	ret = binder_inc_ref_olocked(ref, strong, target_list);
 	*rdata = ref->data;
+	binder_proc_unlock(proc);
 	if (new_ref && ref != new_ref)
 		/*
 		 * Another thread created the ref first so
@@ -2497,11 +2501,14 @@ static void binder_transaction(struct binder_proc *proc,
 			 * stays alive until the transaction is
 			 * done.
 			 */
-			ref = binder_get_ref(proc, tr->target.handle, true);
+			binder_proc_lock(proc);
+			ref = binder_get_ref_olocked(proc, tr->target.handle,
+						     true);
 			if (ref) {
 				binder_inc_node(ref->node, 1, 0, NULL);
 				target_node = ref->node;
 			}
+			binder_proc_unlock(proc);
 			if (target_node == NULL) {
 				binder_user_error("%d:%d got transaction to invalid handle\n",
 					proc->pid, thread->pid);
@@ -3277,7 +3284,7 @@ static int binder_thread_write(struct binder_proc *proc,
 			uint32_t target;
 			binder_uintptr_t cookie;
 			struct binder_ref *ref;
-			struct binder_ref_death *death;
+			struct binder_ref_death *death = NULL;
 
 			if (get_user(target, (uint32_t __user *)ptr))
 				return -EFAULT;
@@ -3285,7 +3292,29 @@ static int binder_thread_write(struct binder_proc *proc,
 			if (get_user(cookie, (binder_uintptr_t __user *)ptr))
 				return -EFAULT;
 			ptr += sizeof(binder_uintptr_t);
-			ref = binder_get_ref(proc, target, false);
+			if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
+				/*
+				 * Allocate memory for death notification
+				 * before taking lock
+				 */
+				death = kzalloc(sizeof(*death), GFP_KERNEL);
+				if (death == NULL) {
+					WARN_ON(thread->return_error.cmd !=
+						BR_OK);
+					thread->return_error.cmd = BR_ERROR;
+					binder_enqueue_work(
+						thread->proc,
+						&thread->return_error.work,
+						&thread->todo);
+					binder_debug(
+						BINDER_DEBUG_FAILED_TRANSACTION,
+						"%d:%d BC_REQUEST_DEATH_NOTIFICATION failed\n",
+						proc->pid, thread->pid);
+					break;
+				}
+			}
+			binder_proc_lock(proc);
+			ref = binder_get_ref_olocked(proc, target, false);
 			if (ref == NULL) {
 				binder_user_error("%d:%d %s invalid ref %d\n",
 					proc->pid, thread->pid,
@@ -3293,6 +3322,8 @@ static int binder_thread_write(struct binder_proc *proc,
 					"BC_REQUEST_DEATH_NOTIFICATION" :
 					"BC_CLEAR_DEATH_NOTIFICATION",
 					target);
+				binder_proc_unlock(proc);
+				kfree(death);
 				break;
 			}
 
@@ -3310,20 +3341,8 @@ static int binder_thread_write(struct binder_proc *proc,
 				if (ref->death) {
 					binder_user_error("%d:%d BC_REQUEST_DEATH_NOTIFICATION death notification already set\n",
 						proc->pid, thread->pid);
-					break;
-				}
-				death = kzalloc(sizeof(*death), GFP_KERNEL);
-				if (death == NULL) {
-					WARN_ON(thread->return_error.cmd !=
-						BR_OK);
-					thread->return_error.cmd = BR_ERROR;
-					binder_enqueue_work(
-						thread->proc,
-						&thread->return_error.work,
-						&thread->todo);
-					binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
-						     "%d:%d BC_REQUEST_DEATH_NOTIFICATION failed\n",
-						     proc->pid, thread->pid);
+					binder_proc_unlock(proc);
+					kfree(death);
 					break;
 				}
 				binder_stats_created(BINDER_STAT_DEATH);
@@ -3356,6 +3375,7 @@ static int binder_thread_write(struct binder_proc *proc,
 					binder_user_error("%d:%d BC_CLEAR_DEATH_NOTIFICATION death notification not active\n",
 						proc->pid, thread->pid);
 					binder_node_unlock(ref->node);
+					binder_proc_unlock(proc);
 					break;
 				}
 				death = ref->death;
@@ -3365,6 +3385,7 @@ static int binder_thread_write(struct binder_proc *proc,
 						(u64)death->cookie,
 						(u64)cookie);
 					binder_node_unlock(ref->node);
+					binder_proc_unlock(proc);
 					break;
 				}
 				ref->death = NULL;
@@ -3391,6 +3412,7 @@ static int binder_thread_write(struct binder_proc *proc,
 				binder_inner_proc_unlock(proc);
 				binder_node_unlock(ref->node);
 			}
+			binder_proc_unlock(proc);
 		} break;
 		case BC_DEAD_BINDER_DONE: {
 			struct binder_work *w;
@@ -4601,14 +4623,18 @@ static void binder_deferred_release(struct binder_proc *proc)
 	binder_inner_proc_unlock(proc);
 
 	outgoing_refs = 0;
+	binder_proc_lock(proc);
 	while ((n = rb_first(&proc->refs_by_desc))) {
 		struct binder_ref *ref;
 
 		ref = rb_entry(n, struct binder_ref, rb_node_desc);
 		outgoing_refs++;
-		binder_cleanup_ref(ref);
+		binder_cleanup_ref_olocked(ref);
+		binder_proc_unlock(proc);
 		binder_free_ref(ref);
+		binder_proc_lock(proc);
 	}
+	binder_proc_unlock(proc);
 
 	binder_release_work(proc, &proc->todo);
 	binder_release_work(proc, &proc->delivered_death);
@@ -4816,8 +4842,10 @@ static void print_binder_node_nilocked(struct seq_file *m,
 	}
 }
 
-static void print_binder_ref(struct seq_file *m, struct binder_ref *ref)
+static void print_binder_ref_olocked(struct seq_file *m,
+				     struct binder_ref *ref)
 {
+	WARN_ON(!spin_is_locked(&ref->proc->outer_lock));
 	binder_node_lock(ref->node);
 	seq_printf(m, "  ref %d: desc %d %snode %d s %d w %d d %pK\n",
 		   ref->data.debug_id, ref->data.desc,
@@ -4869,11 +4897,14 @@ static void print_binder_proc(struct seq_file *m,
 		binder_put_node(last_node);
 
 	if (print_all) {
+		binder_proc_lock(proc);
 		for (n = rb_first(&proc->refs_by_desc);
 		     n != NULL;
 		     n = rb_next(n))
-			print_binder_ref(m, rb_entry(n, struct binder_ref,
-						     rb_node_desc));
+			print_binder_ref_olocked(m, rb_entry(n,
+							    struct binder_ref,
+							    rb_node_desc));
+		binder_proc_unlock(proc);
 	}
 	binder_alloc_print_allocated(m, &proc->alloc);
 	binder_inner_proc_lock(proc);
@@ -5013,6 +5044,7 @@ static void print_binder_proc_stats(struct seq_file *m,
 	count = 0;
 	strong = 0;
 	weak = 0;
+	binder_proc_lock(proc);
 	for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
 		struct binder_ref *ref = rb_entry(n, struct binder_ref,
 						  rb_node_desc);
@@ -5020,6 +5052,7 @@ static void print_binder_proc_stats(struct seq_file *m,
 		strong += ref->data.strong;
 		weak += ref->data.weak;
 	}
+	binder_proc_unlock(proc);
 	seq_printf(m, "  refs: %d s %d w %d\n", count, strong, weak);
 
 	count = binder_alloc_get_allocated_count(&proc->alloc);
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 35/37] binder: protect against stale pointers in print_binder_transaction
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (33 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 34/37] binder: protect binder_ref with outer lock Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-29 19:02 ` [PATCH 36/37] binder: fix death race conditions Todd Kjos
                   ` (2 subsequent siblings)
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

When printing transactions, there were several race conditions
that could cause a stale pointer to be dereferenced. Fix this by
reading the pointer once and using it only if valid (which is
safe). The transaction buffer also needs protection via the proc
lock, so it is printed only when the correct lock is held.
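
The pattern, sketched (sketch_print_transaction() is hypothetical
and shown only for clarity; the real function in the diff below is
print_binder_transaction_ilocked()):

static void sketch_print_transaction(struct seq_file *m,
                                     struct binder_proc *proc,
                                     struct binder_transaction *t)
{
        struct binder_proc *to_proc;
        struct binder_buffer *buffer = t->buffer;

        spin_lock(&t->lock);
        to_proc = t->to_proc;   /* read once under t->lock */
        seq_printf(m, " to %d", to_proc ? to_proc->pid : 0);
        spin_unlock(&t->lock);

        if (proc != to_proc)    /* not holding the owner's inner lock */
                return;
        if (buffer)             /* safe: protected by proc->inner_lock */
                seq_printf(m, " size %zd", buffer->data_size);
}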

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 60 ++++++++++++++++++++++++++++++++----------------
 1 file changed, 40 insertions(+), 20 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index f07f0d488aa4..36ef88d10631 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -4702,35 +4702,52 @@ binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer)
 	mutex_unlock(&binder_deferred_lock);
 }
 
-static void print_binder_transaction(struct seq_file *m, const char *prefix,
-				     struct binder_transaction *t)
+static void print_binder_transaction_ilocked(struct seq_file *m,
+					     struct binder_proc *proc,
+					     const char *prefix,
+					     struct binder_transaction *t)
 {
+	struct binder_proc *to_proc;
+	struct binder_buffer *buffer = t->buffer;
+
+	WARN_ON(!spin_is_locked(&proc->inner_lock));
 	spin_lock(&t->lock);
+	to_proc = t->to_proc;
 	seq_printf(m,
 		   "%s %d: %p from %d:%d to %d:%d code %x flags %x pri %ld r%d",
 		   prefix, t->debug_id, t,
 		   t->from ? t->from->proc->pid : 0,
 		   t->from ? t->from->pid : 0,
-		   t->to_proc ? t->to_proc->pid : 0,
+		   to_proc ? to_proc->pid : 0,
 		   t->to_thread ? t->to_thread->pid : 0,
 		   t->code, t->flags, t->priority, t->need_reply);
 	spin_unlock(&t->lock);
 
-	if (t->buffer == NULL) {
+	if (proc != to_proc) {
+		/*
+		 * Can only safely deref buffer if we are holding the
+		 * correct proc inner lock for this node
+		 */
+		seq_puts(m, "\n");
+		return;
+	}
+
+	if (buffer == NULL) {
 		seq_puts(m, " buffer free\n");
 		return;
 	}
-	if (t->buffer->target_node)
-		seq_printf(m, " node %d",
-			   t->buffer->target_node->debug_id);
+	if (buffer->target_node)
+		seq_printf(m, " node %d", buffer->target_node->debug_id);
 	seq_printf(m, " size %zd:%zd data %p\n",
-		   t->buffer->data_size, t->buffer->offsets_size,
-		   t->buffer->data);
+		   buffer->data_size, buffer->offsets_size,
+		   buffer->data);
 }
 
-static void print_binder_work_ilocked(struct seq_file *m, const char *prefix,
-				      const char *transaction_prefix,
-				      struct binder_work *w)
+static void print_binder_work_ilocked(struct seq_file *m,
+				     struct binder_proc *proc,
+				     const char *prefix,
+				     const char *transaction_prefix,
+				     struct binder_work *w)
 {
 	struct binder_node *node;
 	struct binder_transaction *t;
@@ -4738,7 +4755,8 @@ static void print_binder_work_ilocked(struct seq_file *m, const char *prefix,
 	switch (w->type) {
 	case BINDER_WORK_TRANSACTION:
 		t = container_of(w, struct binder_transaction, work);
-		print_binder_transaction(m, transaction_prefix, t);
+		print_binder_transaction_ilocked(
+				m, proc, transaction_prefix, t);
 		break;
 	case BINDER_WORK_RETURN_ERROR: {
 		struct binder_error *e = container_of(
@@ -4789,20 +4807,21 @@ static void print_binder_thread_ilocked(struct seq_file *m,
 	t = thread->transaction_stack;
 	while (t) {
 		if (t->from == thread) {
-			print_binder_transaction(m,
-						 "    outgoing transaction", t);
+			print_binder_transaction_ilocked(m, thread->proc,
+					"    outgoing transaction", t);
 			t = t->from_parent;
 		} else if (t->to_thread == thread) {
-			print_binder_transaction(m,
+			print_binder_transaction_ilocked(m, thread->proc,
 						 "    incoming transaction", t);
 			t = t->to_parent;
 		} else {
-			print_binder_transaction(m, "    bad transaction", t);
+			print_binder_transaction_ilocked(m, thread->proc,
+					"    bad transaction", t);
 			t = NULL;
 		}
 	}
 	list_for_each_entry(w, &thread->todo, entry) {
-		print_binder_work_ilocked(m, "    ",
+		print_binder_work_ilocked(m, thread->proc, "    ",
 					  "    pending transaction", w);
 	}
 	if (!print_always && m->count == header_pos)
@@ -4837,7 +4856,7 @@ static void print_binder_node_nilocked(struct seq_file *m,
 	seq_puts(m, "\n");
 	if (node->proc) {
 		list_for_each_entry(w, &node->async_todo, entry)
-			print_binder_work_ilocked(m, "    ",
+			print_binder_work_ilocked(m, node->proc, "    ",
 					  "    pending async transaction", w);
 	}
 }
@@ -4909,7 +4928,8 @@ static void print_binder_proc(struct seq_file *m,
 	binder_alloc_print_allocated(m, &proc->alloc);
 	binder_inner_proc_lock(proc);
 	list_for_each_entry(w, &proc->todo, entry)
-		print_binder_work_ilocked(m, "  ", "  pending transaction", w);
+		print_binder_work_ilocked(m, proc, "  ",
+					  "  pending transaction", w);
 	list_for_each_entry(w, &proc->delivered_death, entry) {
 		seq_puts(m, "  has delivered dead binder\n");
 		break;
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 36/37] binder: fix death race conditions
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (34 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 35/37] binder: protect against stale pointers in print_binder_transaction Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-30  6:05   ` Greg KH
  2017-06-29 19:02 ` [PATCH 37/37] binder: remove global binder lock Todd Kjos
  2017-06-30  6:04 ` [PATCH 00/37] fine-grained locking in binder driver Greg KH
  37 siblings, 1 reply; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

From: Martijn Coenen <maco@google.com>

A race existed where one thread could register
a death notification for a node, while another
thread was cleaning up that node and sending
out death notifications for its references,
causing simultaneous access to ref->death
because different locks were held.
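
The shape of the fix, sketched (sketch_request_death() is a
hypothetical helper, not one of the hunks below): both paths now
hold ref->node->lock while touching ref->death.

static int sketch_request_death(struct binder_ref *ref,
                                struct binder_ref_death *death)
{
        binder_node_lock(ref->node);
        if (ref->death) {               /* lost the race: already set */
                binder_node_unlock(ref->node);
                kfree(death);
                return -EINVAL;
        }
        ref->death = death;             /* safe: node->lock held */
        binder_node_unlock(ref->node);
        return 0;
}

/* The cleanup path (binder_node_release) holds the same node->lock
 * while walking node->refs, so the check-and-set above cannot
 * interleave with it; the inner proc lock is then taken per-ref to
 * synchronize with queued death notifications. */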

Signed-off-by: Martijn Coenen <maco@google.com>
---
 drivers/android/binder.c | 64 ++++++++++++++++++++++++------------------------
 1 file changed, 32 insertions(+), 32 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 36ef88d10631..1e50b034d49a 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -442,6 +442,7 @@ struct binder_ref_data {
  *               ref for deletion in binder_cleanup_ref, a non-NULL
  *               @node indicates the node must be freed
  * @death:       pointer to death notification (ref_death) if requested
+ *               (protected by @node->lock)
  *
  * Structure to track references from procA to target node (on procB). This
  * structure is unsafe to access without holding @proc->outer_lock.
@@ -3337,10 +3338,12 @@ static int binder_thread_write(struct binder_proc *proc,
 				     ref->data.desc, ref->data.strong,
 				     ref->data.weak, ref->node->debug_id);
 
+			binder_node_lock(ref->node);
 			if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
 				if (ref->death) {
 					binder_user_error("%d:%d BC_REQUEST_DEATH_NOTIFICATION death notification already set\n",
 						proc->pid, thread->pid);
+					binder_node_unlock(ref->node);
 					binder_proc_unlock(proc);
 					kfree(death);
 					break;
@@ -3349,7 +3352,6 @@ static int binder_thread_write(struct binder_proc *proc,
 				INIT_LIST_HEAD(&death->work.entry);
 				death->cookie = cookie;
 				ref->death = death;
-				binder_node_lock(ref->node);
 				if (ref->node->proc == NULL) {
 					ref->death->work.type = BINDER_WORK_DEAD_BINDER;
 					if (thread->looper &
@@ -3368,9 +3370,7 @@ static int binder_thread_write(struct binder_proc *proc,
 								&proc->wait);
 					}
 				}
-				binder_node_unlock(ref->node);
 			} else {
-				binder_node_lock(ref->node);
 				if (ref->death == NULL) {
 					binder_user_error("%d:%d BC_CLEAR_DEATH_NOTIFICATION death notification not active\n",
 						proc->pid, thread->pid);
@@ -3410,8 +3410,8 @@ static int binder_thread_write(struct binder_proc *proc,
 					death->work.type = BINDER_WORK_DEAD_BINDER_AND_CLEAR;
 				}
 				binder_inner_proc_unlock(proc);
-				binder_node_unlock(ref->node);
 			}
+			binder_node_unlock(ref->node);
 			binder_proc_unlock(proc);
 		} break;
 		case BC_DEAD_BINDER_DONE: {
@@ -3748,44 +3748,39 @@ static int binder_thread_read(struct binder_proc *proc,
 		case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
 			struct binder_ref_death *death;
 			uint32_t cmd;
+			binder_uintptr_t cookie;
 
 			death = container_of(w, struct binder_ref_death, work);
 			if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
 				cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE;
 			else
 				cmd = BR_DEAD_BINDER;
-			/*
-			 * TODO: there is a race condition between
-			 * death notification requests and delivery
-			 * of the notifications. This will be handled
-			 * in a later patch.
-			 */
-			binder_inner_proc_unlock(proc);
-			if (put_user(cmd, (uint32_t __user *)ptr))
-				return -EFAULT;
-			ptr += sizeof(uint32_t);
-			if (put_user(death->cookie,
-				     (binder_uintptr_t __user *)ptr))
-				return -EFAULT;
-			ptr += sizeof(binder_uintptr_t);
-			binder_stat_br(proc, thread, cmd);
+			cookie = death->cookie;
+
 			binder_debug(BINDER_DEBUG_DEATH_NOTIFICATION,
 				     "%d:%d %s %016llx\n",
 				      proc->pid, thread->pid,
 				      cmd == BR_DEAD_BINDER ?
 				      "BR_DEAD_BINDER" :
 				      "BR_CLEAR_DEATH_NOTIFICATION_DONE",
-				      (u64)death->cookie);
-
+				      (u64)cookie);
 			if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
+				binder_inner_proc_unlock(proc);
 				kfree(death);
 				binder_stats_deleted(BINDER_STAT_DEATH);
 			} else {
-				binder_inner_proc_lock(proc);
 				binder_enqueue_work_ilocked(
 						w, &proc->delivered_death);
 				binder_inner_proc_unlock(proc);
 			}
+			if (put_user(cmd, (uint32_t __user *)ptr))
+				return -EFAULT;
+			ptr += sizeof(uint32_t);
+			if (put_user(cookie,
+				     (binder_uintptr_t __user *)ptr))
+				return -EFAULT;
+			ptr += sizeof(binder_uintptr_t);
+			binder_stat_br(proc, thread, cmd);
 			if (cmd == BR_DEAD_BINDER)
 				goto done; /* DEAD_BINDER notifications can cause transactions */
 		} break;
@@ -4535,20 +4530,25 @@ static int binder_node_release(struct binder_node *node, int refs)
 
 	hlist_for_each_entry(ref, &node->refs, node_entry) {
 		refs++;
-
-		if (!ref->death)
+		/*
+		 * Need the node lock to synchronize
+		 * with new notification requests and the
+		 * inner lock to synchronize with queued
+		 * death notifications.
+		 */
+		binder_inner_proc_lock(ref->proc);
+		if (!ref->death) {
+			binder_inner_proc_unlock(ref->proc);
 			continue;
+		}
 
 		death++;
 
-		binder_inner_proc_lock(ref->proc);
-		if (list_empty(&ref->death->work.entry)) {
-			ref->death->work.type = BINDER_WORK_DEAD_BINDER;
-			binder_enqueue_work_ilocked(&ref->death->work,
-						    &ref->proc->todo);
-			wake_up_interruptible(&ref->proc->wait);
-		} else
-			BUG();
+		BUG_ON(!list_empty(&ref->death->work.entry));
+		ref->death->work.type = BINDER_WORK_DEAD_BINDER;
+		binder_enqueue_work_ilocked(&ref->death->work,
+					    &ref->proc->todo);
+		wake_up_interruptible(&ref->proc->wait);
 		binder_inner_proc_unlock(ref->proc);
 	}
 
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 37/37] binder: remove global binder lock
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (35 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 36/37] binder: fix death race conditions Todd Kjos
@ 2017-06-29 19:02 ` Todd Kjos
  2017-06-30  6:04 ` [PATCH 00/37] fine-grained locking in binder driver Greg KH
  37 siblings, 0 replies; 58+ messages in thread
From: Todd Kjos @ 2017-06-29 19:02 UTC (permalink / raw)
  To: gregkh, arve, devel, linux-kernel, maco, tkjos

Remove global mutex and rely on fine-grained locking

Signed-off-by: Todd Kjos <tkjos@google.com>
---
 drivers/android/binder.c | 46 +++-------------------------------------------
 1 file changed, 3 insertions(+), 43 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 1e50b034d49a..c8e9d5dfcd86 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -80,8 +80,6 @@
 #include "binder_alloc.h"
 #include "binder_trace.h"
 
-static DEFINE_MUTEX(binder_main_lock);
-
 static HLIST_HEAD(binder_deferred_list);
 static DEFINE_MUTEX(binder_deferred_lock);
 
@@ -924,19 +922,6 @@ static long task_close_fd(struct binder_proc *proc, unsigned int fd)
 	return retval;
 }
 
-static inline void binder_lock(const char *tag)
-{
-	trace_binder_lock(tag);
-	mutex_lock(&binder_main_lock);
-	trace_binder_locked(tag);
-}
-
-static inline void binder_unlock(const char *tag)
-{
-	trace_binder_unlock(tag);
-	mutex_unlock(&binder_main_lock);
-}
-
 static void binder_set_nice(long nice)
 {
 	long min_nice;
@@ -3557,8 +3542,6 @@ static int binder_thread_read(struct binder_proc *proc,
 
 	thread->looper |= BINDER_LOOPER_STATE_WAITING;
 
-	binder_unlock(__func__);
-
 	trace_binder_wait_for_work(wait_for_proc_work,
 				   !!thread->transaction_stack,
 				   !binder_worklist_empty(proc, &thread->todo));
@@ -3584,8 +3567,6 @@ static int binder_thread_read(struct binder_proc *proc,
 			ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
 	}
 
-	binder_lock(__func__);
-
 	binder_inner_proc_lock(proc);
 	if (wait_for_proc_work)
 		proc->ready_threads--;
@@ -4101,8 +4082,6 @@ static unsigned int binder_poll(struct file *filp,
 	struct binder_thread *thread = NULL;
 	int wait_for_proc_work;
 
-	binder_lock(__func__);
-
 	thread = binder_get_thread(proc);
 
 	binder_inner_proc_lock(thread->proc);
@@ -4110,8 +4089,6 @@ static unsigned int binder_poll(struct file *filp,
 		binder_worklist_empty_ilocked(&thread->todo);
 	binder_inner_proc_unlock(thread->proc);
 
-	binder_unlock(__func__);
-
 	if (wait_for_proc_work) {
 		if (binder_has_proc_work(proc, thread))
 			return POLLIN;
@@ -4256,7 +4233,6 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	if (ret)
 		goto err_unlocked;
 
-	binder_lock(__func__);
 	thread = binder_get_thread(proc);
 	if (thread == NULL) {
 		ret = -ENOMEM;
@@ -4315,7 +4291,6 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 err:
 	if (thread)
 		thread->looper_need_return = false;
-	binder_unlock(__func__);
 	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
 	if (ret && ret != -ERESTARTSYS)
 		pr_info("%d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
@@ -4421,15 +4396,11 @@ static int binder_open(struct inode *nodp, struct file *filp)
 	proc->context = &binder_dev->context;
 	binder_alloc_init(&proc->alloc);
 
-	binder_lock(__func__);
-
 	binder_stats_created(BINDER_STAT_PROC);
 	proc->pid = current->group_leader->pid;
 	INIT_LIST_HEAD(&proc->delivered_death);
 	filp->private_data = proc;
 
-	binder_unlock(__func__);
-
 	mutex_lock(&binder_procs_lock);
 	hlist_add_head(&proc->proc_node, &binder_procs);
 	mutex_unlock(&binder_procs_lock);
@@ -4655,7 +4626,6 @@ static void binder_deferred_func(struct work_struct *work)
 	int defer;
 
 	do {
-		binder_lock(__func__);
 		mutex_lock(&binder_deferred_lock);
 		if (!hlist_empty(&binder_deferred_list)) {
 			proc = hlist_entry(binder_deferred_list.first,
@@ -4682,7 +4652,6 @@ static void binder_deferred_func(struct work_struct *work)
 		if (defer & BINDER_DEFERRED_RELEASE)
 			binder_deferred_release(proc); /* frees proc */
 
-		binder_unlock(__func__);
 		if (files)
 			put_files_struct(files);
 	} while (proc);
@@ -5097,8 +5066,6 @@ static int binder_state_show(struct seq_file *m, void *unused)
 	struct binder_node *node;
 	struct binder_node *last_node = NULL;
 
-	binder_lock(__func__);
-
 	seq_puts(m, "binder state:\n");
 
 	spin_lock(&binder_dead_nodes_lock);
@@ -5128,7 +5095,7 @@ static int binder_state_show(struct seq_file *m, void *unused)
 	hlist_for_each_entry(proc, &binder_procs, proc_node)
 		print_binder_proc(m, proc, 1);
 	mutex_unlock(&binder_procs_lock);
-	binder_unlock(__func__);
+
 	return 0;
 }
 
@@ -5136,8 +5103,6 @@ static int binder_stats_show(struct seq_file *m, void *unused)
 {
 	struct binder_proc *proc;
 
-	binder_lock(__func__);
-
 	seq_puts(m, "binder stats:\n");
 
 	print_binder_stats(m, "", &binder_stats);
@@ -5146,7 +5111,7 @@ static int binder_stats_show(struct seq_file *m, void *unused)
 	hlist_for_each_entry(proc, &binder_procs, proc_node)
 		print_binder_proc_stats(m, proc);
 	mutex_unlock(&binder_procs_lock);
-	binder_unlock(__func__);
+
 	return 0;
 }
 
@@ -5154,14 +5119,12 @@ static int binder_transactions_show(struct seq_file *m, void *unused)
 {
 	struct binder_proc *proc;
 
-	binder_lock(__func__);
-
 	seq_puts(m, "binder transactions:\n");
 	mutex_lock(&binder_procs_lock);
 	hlist_for_each_entry(proc, &binder_procs, proc_node)
 		print_binder_proc(m, proc, 0);
 	mutex_unlock(&binder_procs_lock);
-	binder_unlock(__func__);
+
 	return 0;
 }
 
@@ -5170,8 +5133,6 @@ static int binder_proc_show(struct seq_file *m, void *unused)
 	struct binder_proc *itr;
 	int pid = (unsigned long)m->private;
 
-	binder_lock(__func__);
-
 	mutex_lock(&binder_procs_lock);
 	hlist_for_each_entry(itr, &binder_procs, proc_node) {
 		if (itr->pid == pid) {
@@ -5181,7 +5142,6 @@ static int binder_proc_show(struct seq_file *m, void *unused)
 	}
 	mutex_unlock(&binder_procs_lock);
 
-	binder_unlock(__func__);
 	return 0;
 }
 
-- 
2.13.2.725.g09c95d1e9-goog

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH 00/37] fine-grained locking in binder driver
  2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
                   ` (36 preceding siblings ...)
  2017-06-29 19:02 ` [PATCH 37/37] binder: remove global binder lock Todd Kjos
@ 2017-06-30  6:04 ` Greg KH
  2017-07-17 12:49   ` Greg KH
  37 siblings, 1 reply; 58+ messages in thread
From: Greg KH @ 2017-06-30  6:04 UTC (permalink / raw)
  To: Todd Kjos; +Cc: arve, devel, linux-kernel, maco, tkjos

On Thu, Jun 29, 2017 at 12:01:34PM -0700, Todd Kjos wrote:
> The binder driver uses a global mutex to serialize access to state in a
> multi-threaded environment. This global lock has been increasingly
> problematic as Android devices have scaled to more cores. The problem is
> not so much contention for the global lock which still remains relatively
> low, but the priority inversion which occurs regularly when a lower
> priority thread is preempted while holding the lock and a higher priority
> thread becomes blocked on it. These cases can be especially painful if the
> lower priority thread runs in the background on a slow core at a low
> frequency. This often manifests as missed frames or other glitches.
> 
> For several years, a hacky solution has been used in many Android devices
> which disables preemption for most of the time the global mutex is held.
> This dramatically decreased the cases of glitches induced by priority
> inversion and increased the average throughput for binder transactions.
> 
> Moving to fine-grained locking in this patchset results in a cleaner
> and more scalable solution than the preempt disable hack. Priority
> inversion is decreased significantly.

Yeah, it is great to see this work done!  I'll review these on Monday,
sorry, am traveling at the moment...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 36/37] binder: fix death race conditions
  2017-06-29 19:02 ` [PATCH 36/37] binder: fix death race conditions Todd Kjos
@ 2017-06-30  6:05   ` Greg KH
  0 siblings, 0 replies; 58+ messages in thread
From: Greg KH @ 2017-06-30  6:05 UTC (permalink / raw)
  To: Todd Kjos; +Cc: arve, devel, linux-kernel, maco, tkjos

On Thu, Jun 29, 2017 at 12:02:10PM -0700, Todd Kjos wrote:
> From: Martijn Coenen <maco@google.com>
> 
> A race existed where one thread could register
> a death notification for a node, while another
> thread was cleaning up that node and sending
> out death notifications for its references,
> causing simultaneous access to ref->death
> because different locks were held.
> 
> Signed-off-by: Martijn Coenen <maco@google.com>

Note, if you forward on a patch from someone else, like this, you should
also put your signed-off-by: on it, as you hopefully are agreeing that
it is ok to forward on.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 01/37] Revert "android: binder: Sanity check at binder ioctl"
  2017-06-29 19:01 ` [PATCH 01/37] Revert "android: binder: Sanity check at binder ioctl" Todd Kjos
@ 2017-07-03  9:17   ` Greg KH
       [not found]     ` <CAHRSSExh9JX5xiSRig55DSei31C_BPSasOKB+BTC6jjjuZ+ZpA@mail.gmail.com>
  0 siblings, 1 reply; 58+ messages in thread
From: Greg KH @ 2017-07-03  9:17 UTC (permalink / raw)
  To: Todd Kjos; +Cc: arve, devel, linux-kernel, maco, tkjos

On Thu, Jun 29, 2017 at 12:01:35PM -0700, Todd Kjos wrote:
> This reverts commit a906d6931f3ccaf7de805643190765ddd7378e27.
> 

I need a reason why this is to be reverted here :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-06-29 19:01 ` [PATCH 02/37] binder: use group leader instead of open thread Todd Kjos
@ 2017-07-03  9:17   ` Greg KH
       [not found]     ` <CAHRSSEyH3t0igLJqcC4e-HR68RH0bg4T310jnRHZzrMChoOeOg@mail.gmail.com>
  2017-07-24 21:00   ` John Stultz
  1 sibling, 1 reply; 58+ messages in thread
From: Greg KH @ 2017-07-03  9:17 UTC (permalink / raw)
  To: Todd Kjos; +Cc: arve, devel, linux-kernel, maco, tkjos

On Thu, Jun 29, 2017 at 12:01:36PM -0700, Todd Kjos wrote:
> The binder allocator assumes that the thread that
> called binder_open will never die for the lifetime of
> that proc. That thread is normally the group_leader,
> however it may not be. Use the group_leader instead
> of current.
> 
> Signed-off-by: Todd Kjos <tkjos@google.com>
> ---
>  drivers/android/binder.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

This should be backported to older kernels, right?  How far back?  Same
for patch 01/37, right?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 03/37] binder: Use wake up hint for synchronous transactions.
  2017-06-29 19:01 ` [PATCH 03/37] binder: Use wake up hint for synchronous transactions Todd Kjos
@ 2017-07-03  9:18   ` Greg KH
  0 siblings, 0 replies; 58+ messages in thread
From: Greg KH @ 2017-07-03  9:18 UTC (permalink / raw)
  To: Todd Kjos
  Cc: arve, devel, linux-kernel, maco, tkjos, Riley Andrews, Omprakash Dhyade

On Thu, Jun 29, 2017 at 12:01:37PM -0700, Todd Kjos wrote:
> From: Riley Andrews <riandrews@google.com>
> 
> Use wake_up_interruptible_sync() to hint to the scheduler binder
> transactions are synchronous wakeups. Disable preemption while waking
> to avoid ping-ponging on the binder lock.
> 
> Signed-off-by: Todd Kjos <tkjos@google.com>
> Signed-off-by: Omprakash Dhyade <odhyade@codeaurora.org>

No signed-off-by from Riley?  Why not?  (just curious, not a requirement
here, but generally good practice.)

> ---
>  drivers/android/binder.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> index 9393924ae8e8..f7665c31feca 100644
> --- a/drivers/android/binder.c
> +++ b/drivers/android/binder.c
> @@ -2200,8 +2200,12 @@ static void binder_transaction(struct binder_proc *proc,
>  	list_add_tail(&t->work.entry, target_list);
>  	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
>  	list_add_tail(&tcomplete->entry, &thread->todo);
> -	if (target_wait)
> -		wake_up_interruptible(target_wait);
> +	if (target_wait) {
> +		if (reply || !(t->flags & TF_ONE_WAY))
> +			wake_up_interruptible_sync(target_wait);
> +		else
> +			wake_up_interruptible(target_wait);
> +	}

This too should be backported, right?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 01/37] Revert "android: binder: Sanity check at binder ioctl"
       [not found]     ` <CAHRSSExh9JX5xiSRig55DSei31C_BPSasOKB+BTC6jjjuZ+ZpA@mail.gmail.com>
@ 2017-07-05 18:47       ` Greg KH
  0 siblings, 0 replies; 58+ messages in thread
From: Greg KH @ 2017-07-05 18:47 UTC (permalink / raw)
  To: Todd Kjos
  Cc: Todd Kjos, Arve Hjønnevåg, devel, LKML, Martijn Coenen

On Wed, Jul 05, 2017 at 09:08:48AM -0700, Todd Kjos wrote:
> It is reverted here because if we keep it, it would need to be refactored as
> part of this patchset since vma_vm_mm is no longer in struct binder_proc (moved
> to binder_alloc). The patch introduces a race condition in the driver and an
> attempt to fix it was submitted in "[PATCH v2] android: binder: fix dangling
> pointer comparison" and the conclusion then was to revert this patch. After
> discussing with Arve we decided reverting now was better than refactoring it to
> work with FGL.
> 
> Do you want me to add rationale to the commit message?

Yes, please do :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
       [not found]     ` <CAHRSSEyH3t0igLJqcC4e-HR68RH0bg4T310jnRHZzrMChoOeOg@mail.gmail.com>
@ 2017-07-05 18:47       ` Greg KH
  2017-07-07 18:23         ` Todd Kjos
  0 siblings, 1 reply; 58+ messages in thread
From: Greg KH @ 2017-07-05 18:47 UTC (permalink / raw)
  To: Todd Kjos
  Cc: Todd Kjos, Arve Hjønnevåg, devel, LKML, Martijn Coenen

On Wed, Jul 05, 2017 at 09:13:16AM -0700, Todd Kjos wrote:
> Yes, this one back to 4.4. 01/37 should go to 4.9 (it's not in 4.4).

Great, if this gets a respin, can you add it?  If not, I'll try to
remember it :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-05 18:47       ` Greg KH
@ 2017-07-07 18:23         ` Todd Kjos
  2017-07-07 18:29           ` Greg KH
  0 siblings, 1 reply; 58+ messages in thread
From: Todd Kjos @ 2017-07-07 18:23 UTC (permalink / raw)
  To: Greg KH; +Cc: Todd Kjos, Arve Hjønnevåg, devel, LKML, Martijn Coenen

I suspect there won't be a respin. I'll ping you later if you don't
remember it yourself ;)

On Wed, Jul 5, 2017 at 11:47 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Wed, Jul 05, 2017 at 09:13:16AM -0700, Todd Kjos wrote:
>> Yes, this one back to 4.4. 01/37 should go to 4.9 (it's not in 4.4).
>
> Great, if this gets a respin, can you add it?  If not, I'll try to
> remember it :)
>
> thanks,
>
> greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-07 18:23         ` Todd Kjos
@ 2017-07-07 18:29           ` Greg KH
  0 siblings, 0 replies; 58+ messages in thread
From: Greg KH @ 2017-07-07 18:29 UTC (permalink / raw)
  To: Todd Kjos
  Cc: Todd Kjos, Arve Hjønnevåg, devel, LKML, Martijn Coenen

On Fri, Jul 07, 2017 at 11:23:16AM -0700, Todd Kjos wrote:
> I suspect there won't be a respin. I'll ping you later if you don't
> remember it yourself ;)

Ok, not a problem, I can't do anything with these until after 4.13-rc1
is out, so it will be at least a week or so until I get to them...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 00/37] fine-grained locking in binder driver
  2017-06-30  6:04 ` [PATCH 00/37] fine-grained locking in binder driver Greg KH
@ 2017-07-17 12:49   ` Greg KH
  0 siblings, 0 replies; 58+ messages in thread
From: Greg KH @ 2017-07-17 12:49 UTC (permalink / raw)
  To: Todd Kjos; +Cc: devel, arve, maco, linux-kernel, tkjos

On Fri, Jun 30, 2017 at 08:04:21AM +0200, Greg KH wrote:
> On Thu, Jun 29, 2017 at 12:01:34PM -0700, Todd Kjos wrote:
> > The binder driver uses a global mutex to serialize access to state in a
> > multi-threaded environment. This global lock has been increasingly
> > problematic as Android devices have scaled to more cores. The problem is
> > not so much contention for the global lock which still remains relatively
> > low, but the priority inversion which occurs regularly when a lower
> > priority thread is preempted while holding the lock and a higher priority
> > thread becomes blocked on it. These cases can be especially painful if the
> > lower priority thread runs in the background on a slow core at a low
> > frequency. This often manifests as missed frames or other glitches.
> > 
> > For several years, a hacky solution has been used in many Android devices
> > which disables preemption for most of the time the global mutex is held.
> > This dramatically decreased the cases of glitches induced by priority
> > inversion and increased the average throughput for binder transactions.
> > 
> > Moving to fine-grained locking in this patchset results in a cleaner
> > and more scalable solution than the preempt disable hack. Priority
> > inversion is decreased significantly.
> 
> Yeah, it is great to see this work done!  I'll review these on Monday,
> sorry, am traveling at the moment...

All now queued up in my trees.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-06-29 19:01 ` [PATCH 02/37] binder: use group leader instead of open thread Todd Kjos
  2017-07-03  9:17   ` Greg KH
@ 2017-07-24 21:00   ` John Stultz
  2017-07-24 21:07     ` John Stultz
  2017-07-24 21:23     ` Greg Kroah-Hartman
  1 sibling, 2 replies; 58+ messages in thread
From: John Stultz @ 2017-07-24 21:00 UTC (permalink / raw)
  To: Todd Kjos
  Cc: Greg Kroah-Hartman, Arve Hjønnevåg, devel,
	Linux Kernel Mailing List, Martijn Coenen, Todd Kjos

On Thu, Jun 29, 2017 at 12:01 PM, Todd Kjos <tkjos@android.com> wrote:
> The binder allocator assumes that the thread that
> called binder_open will never die for the lifetime of
> that proc. That thread is normally the group_leader,
> however it may not be. Use the group_leader instead
> of current.
>
> Signed-off-by: Todd Kjos <tkjos@google.com>
> ---
>  drivers/android/binder.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> index 157bd3e49ff4..9393924ae8e8 100644
> --- a/drivers/android/binder.c
> +++ b/drivers/android/binder.c
> @@ -3460,8 +3460,8 @@ static int binder_open(struct inode *nodp, struct file *filp)
>         proc = kzalloc(sizeof(*proc), GFP_KERNEL);
>         if (proc == NULL)
>                 return -ENOMEM;
> -       get_task_struct(current);
> -       proc->tsk = current;
> +       get_task_struct(current->group_leader);
> +       proc->tsk = current->group_leader;
>         INIT_LIST_HEAD(&proc->todo);
>         init_waitqueue_head(&proc->wait);
>         proc->default_priority = task_nice(current);
> --

So this patch landed in 4.13-rc2 (c4ea41ba195d), and seems to be
causing a regression for me w/ HiKey. With it, I'm getting crashes
with the bluetooth and wifi HALs.  Reverting this patch seems to
resolve the issue

I suspect some other dependency from the original patchset is missing?

thanks
-john

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-24 21:00   ` John Stultz
@ 2017-07-24 21:07     ` John Stultz
  2017-07-25  9:13       ` Martijn Coenen
  2017-07-24 21:23     ` Greg Kroah-Hartman
  1 sibling, 1 reply; 58+ messages in thread
From: John Stultz @ 2017-07-24 21:07 UTC (permalink / raw)
  To: Todd Kjos
  Cc: Greg Kroah-Hartman, Arve Hjønnevåg, devel,
	Linux Kernel Mailing List, Martijn Coenen, Todd Kjos

On Mon, Jul 24, 2017 at 2:00 PM, John Stultz <john.stultz@linaro.org> wrote:
> On Thu, Jun 29, 2017 at 12:01 PM, Todd Kjos <tkjos@android.com> wrote:
>> The binder allocator assumes that the thread that
>> called binder_open will never die for the lifetime of
>> that proc. That thread is normally the group_leader,
>> however it may not be. Use the group_leader instead
>> of current.
>>
>> Signed-off-by: Todd Kjos <tkjos@google.com>
>> ---
>>  drivers/android/binder.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
>> index 157bd3e49ff4..9393924ae8e8 100644
>> --- a/drivers/android/binder.c
>> +++ b/drivers/android/binder.c
>> @@ -3460,8 +3460,8 @@ static int binder_open(struct inode *nodp, struct file *filp)
>>         proc = kzalloc(sizeof(*proc), GFP_KERNEL);
>>         if (proc == NULL)
>>                 return -ENOMEM;
>> -       get_task_struct(current);
>> -       proc->tsk = current;
>> +       get_task_struct(current->group_leader);
>> +       proc->tsk = current->group_leader;
>>         INIT_LIST_HEAD(&proc->todo);
>>         init_waitqueue_head(&proc->wait);
>>         proc->default_priority = task_nice(current);
>> --
>
> So this patch landed in 4.13-rc2 (c4ea41ba195d), and seems to be
> causing a regression for me w/ HiKey. With it, I'm getting crashes
> with the bluetooth and wifi HALs.  Reverting this patch seems to
> resolve the issue
>
> I suspect some other dependency from the original patchset is missing?

As for the crash, the logcat details show:

12-31 16:00:36.632  2518  2584 E hw-ProcessState: Using /dev/hwbinder
failed: unable to mmap transaction memory.
12-31 16:00:36.632  2518  2566 D bt_hci  : hci_module_start_up
starting async portion
12-31 16:00:36.632  2518  2584 E
android.hardware.bluetooth@1.0::BluetoothHci: getService:
defaultServiceManager()->getTransport returns
Status(EX_TRANSACTION_FAILED): '-9 Bad file descriptor: '
12-31 16:00:36.633  2518  2584 F         :
[1231/160036:FATAL:hci_layer_android.cc(109)] Check failed: btHci !=
nullptr.
12-31 16:00:36.634  2518  2584 F libc    : Fatal signal 6 (SIGABRT),
code -6 in tid 2584 (hci_thread)
...
12-31 16:00:38.027  2009  2061 E SupplicantStaIfaceHal: Exception
while trying to register a listener for ISupplicant service:
android.os.RemoteException: HwBinder Error: (-2147483648)
12-31 16:00:38.027  2009  2061 E WifiMonitor: startMonitoring(wlan0) failed!
12-31 16:00:38.028  2009  2061 E SupplicantStaIfaceHal: Can't call
setDebugParams, ISupplicant is null
12-31 16:00:38.030  2009  2061 D WifiConfigStore: Reading from stores
completed in 2 ms.
12-31 16:00:38.034  2009  2061 D WIFI    : Registering NetworkFactory
12-31 16:00:38.035  2009  2061 D WIFI_UT : Registering NetworkFactory
12-31 16:00:38.035  2009  2065 D ConnectivityService: Got
NetworkFactory Messenger for WIFI
12-31 16:00:38.035  2009  2065 D ConnectivityService: Got
NetworkFactory Messenger for WIFI_UT
12-31 16:00:38.037  2009  2061 D WifiConfigStore: Reading from user
store completed in 2 ms.
12-31 16:00:38.055  2009  2061 D WifiConfigStore: Writing to stores
completed in 17 ms.
12-31 16:00:38.055  2009  2061 E WifiStateMachine: Failed to setup
control channel, restart supplicant
...
etc.

thanks
-john

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-24 21:00   ` John Stultz
  2017-07-24 21:07     ` John Stultz
@ 2017-07-24 21:23     ` Greg Kroah-Hartman
  2017-07-24 21:53       ` John Stultz
  1 sibling, 1 reply; 58+ messages in thread
From: Greg Kroah-Hartman @ 2017-07-24 21:23 UTC (permalink / raw)
  To: John Stultz
  Cc: Todd Kjos, devel, Linux Kernel Mailing List,
	Arve Hjønnevåg, Martijn Coenen, Todd Kjos

On Mon, Jul 24, 2017 at 02:00:45PM -0700, John Stultz wrote:
> On Thu, Jun 29, 2017 at 12:01 PM, Todd Kjos <tkjos@android.com> wrote:
> > The binder allocator assumes that the thread that
> > called binder_open will never die for the lifetime of
> > that proc. That thread is normally the group_leader,
> > however it may not be. Use the group_leader instead
> > of current.
> >
> > Signed-off-by: Todd Kjos <tkjos@google.com>
> > ---
> >  drivers/android/binder.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> > index 157bd3e49ff4..9393924ae8e8 100644
> > --- a/drivers/android/binder.c
> > +++ b/drivers/android/binder.c
> > @@ -3460,8 +3460,8 @@ static int binder_open(struct inode *nodp, struct file *filp)
> >         proc = kzalloc(sizeof(*proc), GFP_KERNEL);
> >         if (proc == NULL)
> >                 return -ENOMEM;
> > -       get_task_struct(current);
> > -       proc->tsk = current;
> > +       get_task_struct(current->group_leader);
> > +       proc->tsk = current->group_leader;
> >         INIT_LIST_HEAD(&proc->todo);
> >         init_waitqueue_head(&proc->wait);
> >         proc->default_priority = task_nice(current);
> > --
> 
> So this patch landed in 4.13-rc2 (c4ea41ba195d), and seems to be
> causing a regression for me w/ HiKey. With it, I'm getting crashes
> with the bluetooth and wifi HALs.  Reverting this patch seems to
> resolve the issue
> 
> I suspect some other dependency from the original patchset is missing?

If you use linux-next (or all of the patches in this series), does the
problem go away?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-24 21:23     ` Greg Kroah-Hartman
@ 2017-07-24 21:53       ` John Stultz
  0 siblings, 0 replies; 58+ messages in thread
From: John Stultz @ 2017-07-24 21:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Todd Kjos, devel, Linux Kernel Mailing List,
	Arve Hjønnevåg, Martijn Coenen, Todd Kjos

On Mon, Jul 24, 2017 at 2:23 PM, Greg Kroah-Hartman
<gregkh@linuxfoundation.org> wrote:
> On Mon, Jul 24, 2017 at 02:00:45PM -0700, John Stultz wrote:
>> On Thu, Jun 29, 2017 at 12:01 PM, Todd Kjos <tkjos@android.com> wrote:
>> > The binder allocator assumes that the thread that
>> > called binder_open will never die for the lifetime of
>> > that proc. That thread is normally the group_leader,
>> > however it may not be. Use the group_leader instead
>> > of current.
>> >
>> > Signed-off-by: Todd Kjos <tkjos@google.com>
>> > ---
>> >  drivers/android/binder.c | 4 ++--
>> >  1 file changed, 2 insertions(+), 2 deletions(-)
>> >
>> > diff --git a/drivers/android/binder.c b/drivers/android/binder.c
>> > index 157bd3e49ff4..9393924ae8e8 100644
>> > --- a/drivers/android/binder.c
>> > +++ b/drivers/android/binder.c
>> > @@ -3460,8 +3460,8 @@ static int binder_open(struct inode *nodp, struct file *filp)
>> >         proc = kzalloc(sizeof(*proc), GFP_KERNEL);
>> >         if (proc == NULL)
>> >                 return -ENOMEM;
>> > -       get_task_struct(current);
>> > -       proc->tsk = current;
>> > +       get_task_struct(current->group_leader);
>> > +       proc->tsk = current->group_leader;
>> >         INIT_LIST_HEAD(&proc->todo);
>> >         init_waitqueue_head(&proc->wait);
>> >         proc->default_priority = task_nice(current);
>> > --
>>
>> So this patch landed in 4.13-rc2 (c4ea41ba195d), and seems to be
>> causing a regression for me w/ HiKey. With it, I'm getting crashes
>> with the bluetooth and wifi HALs.  Reverting this patch seems to
>> resolve the issue
>>
>> I suspect some other dependency from the original patchset is missing?
>
> If you use linux-next (or all of the patches in this series), does the
> problem go away?

I had tested awhile back the entire set from Todd, and didn't see this issue.

I'll try to find some time to give -next a spin, but it might not be today.

thanks
-john

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-24 21:07     ` John Stultz
@ 2017-07-25  9:13       ` Martijn Coenen
  2017-07-27  9:08         ` Amit Pundir
  0 siblings, 1 reply; 58+ messages in thread
From: Martijn Coenen @ 2017-07-25  9:13 UTC (permalink / raw)
  To: John Stultz
  Cc: Todd Kjos, Greg Kroah-Hartman, Arve Hjønnevåg, devel,
	Linux Kernel Mailing List, Todd Kjos

Hi John,

On Mon, Jul 24, 2017 at 11:07 PM, John Stultz <john.stultz@linaro.org> wrote:
>
> 12-31 16:00:36.632  2518  2584 E hw-ProcessState: Using /dev/hwbinder
> failed: unable to mmap transaction memory.

This doesn't look right. Is there anything in the kernel log?

> 12-31 16:00:36.632  2518  2566 D bt_hci  : hci_module_start_up
> starting async portion
> 12-31 16:00:36.632  2518  2584 E
> android.hardware.bluetooth@1.0::BluetoothHci: getService:
> defaultServiceManager()->getTransport returns
> Status(EX_TRANSACTION_FAILED): '-9 Bad file descriptor: '
> 12-31 16:00:36.633  2518  2584 F         :
> [1231/160036:FATAL:hci_layer_android.cc(109)] Check failed: btHci !=
> nullptr.
> 12-31 16:00:36.634  2518  2584 F libc    : Fatal signal 6 (SIGABRT),
> code -6 in tid 2584 (hci_thread)
> ...
> 12-31 16:00:38.027  2009  2061 E SupplicantStaIfaceHal: Exception
> while trying to register a listener for ISupplicant service:
> android.os.RemoteException: HwBinder Error: (-2147483648)
> 12-31 16:00:38.027  2009  2061 E WifiMonitor: startMonitoring(wlan0) failed!
> 12-31 16:00:38.028  2009  2061 E SupplicantStaIfaceHal: Can't call
> setDebugParams, ISupplicant is null
> 12-31 16:00:38.030  2009  2061 D WifiConfigStore: Reading from stores
> completed in 2 ms.
> 12-31 16:00:38.034  2009  2061 D WIFI    : Registering NetworkFactory
> 12-31 16:00:38.035  2009  2061 D WIFI_UT : Registering NetworkFactory
> 12-31 16:00:38.035  2009  2065 D ConnectivityService: Got
> NetworkFactory Messenger for WIFI
> 12-31 16:00:38.035  2009  2065 D ConnectivityService: Got
> NetworkFactory Messenger for WIFI_UT
> 12-31 16:00:38.037  2009  2061 D WifiConfigStore: Reading from user
> store completed in 2 ms.
> 12-31 16:00:38.055  2009  2061 D WifiConfigStore: Writing to stores
> completed in 17 ms.
> 12-31 16:00:38.055  2009  2061 E WifiStateMachine: Failed to setup
> control channel, restart supplicant
> ...
> etc.
>
> thanks
> -john

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-25  9:13       ` Martijn Coenen
@ 2017-07-27  9:08         ` Amit Pundir
  2017-07-27 13:23           ` Greg Kroah-Hartman
  0 siblings, 1 reply; 58+ messages in thread
From: Amit Pundir @ 2017-07-27  9:08 UTC (permalink / raw)
  To: Martijn Coenen
  Cc: John Stultz, Todd Kjos, Greg Kroah-Hartman,
	Arve Hjønnevåg, devel, Linux Kernel Mailing List,
	Todd Kjos

Hi,

On 25 July 2017 at 14:43, Martijn Coenen <maco@google.com> wrote:
> Hi John,
>
> On Mon, Jul 24, 2017 at 11:07 PM, John Stultz <john.stultz@linaro.org> wrote:
>>
>> 12-31 16:00:36.632  2518  2584 E hw-ProcessState: Using /dev/hwbinder
>> failed: unable to mmap transaction memory.
>
> This doesn't look right. Is there anything in the kernel log?

There is a slight difference between this patch and the one that got
pushed in android-4.9[1]. I cherry-picked those changes and got BT/WiFi
working again on Hikey with the mainline-tracking (4.13-rc2) tree.

Regards,
Amit Pundir
[1] https://android.googlesource.com/kernel/common/+/872c26eb0776ef160447d8703779e2bce0b7230a%5E%21/#F0

>
>> 12-31 16:00:36.632  2518  2566 D bt_hci  : hci_module_start_up
>> starting async portion
>> 12-31 16:00:36.632  2518  2584 E
>> android.hardware.bluetooth@1.0::BluetoothHci: getService:
>> defaultServiceManager()->getTransport returns
>> Status(EX_TRANSACTION_FAILED): '-9 Bad file descriptor: '
>> 12-31 16:00:36.633  2518  2584 F         :
>> [1231/160036:FATAL:hci_layer_android.cc(109)] Check failed: btHci !=
>> nullptr.
>> 12-31 16:00:36.634  2518  2584 F libc    : Fatal signal 6 (SIGABRT),
>> code -6 in tid 2584 (hci_thread)
>> ...
>> 12-31 16:00:38.027  2009  2061 E SupplicantStaIfaceHal: Exception
>> while trying to register a listener for ISupplicant service:
>> android.os.RemoteException: HwBinder Error: (-2147483648)
>> 12-31 16:00:38.027  2009  2061 E WifiMonitor: startMonitoring(wlan0) failed!
>> 12-31 16:00:38.028  2009  2061 E SupplicantStaIfaceHal: Can't call
>> setDebugParams, ISupplicant is null
>> 12-31 16:00:38.030  2009  2061 D WifiConfigStore: Reading from stores
>> completed in 2 ms.
>> 12-31 16:00:38.034  2009  2061 D WIFI    : Registering NetworkFactory
>> 12-31 16:00:38.035  2009  2061 D WIFI_UT : Registering NetworkFactory
>> 12-31 16:00:38.035  2009  2065 D ConnectivityService: Got
>> NetworkFactory Messenger for WIFI
>> 12-31 16:00:38.035  2009  2065 D ConnectivityService: Got
>> NetworkFactory Messenger for WIFI_UT
>> 12-31 16:00:38.037  2009  2061 D WifiConfigStore: Reading from user
>> store completed in 2 ms.
>> 12-31 16:00:38.055  2009  2061 D WifiConfigStore: Writing to stores
>> completed in 17 ms.
>> 12-31 16:00:38.055  2009  2061 E WifiStateMachine: Failed to setup
>> control channel, restart supplicant
>> ...
>> etc.
>>
>> thanks
>> -john

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-27  9:08         ` Amit Pundir
@ 2017-07-27 13:23           ` Greg Kroah-Hartman
  2017-07-27 13:40             ` Martijn Coenen
  2017-07-27 13:42             ` Amit Pundir
  0 siblings, 2 replies; 58+ messages in thread
From: Greg Kroah-Hartman @ 2017-07-27 13:23 UTC (permalink / raw)
  To: Amit Pundir
  Cc: Martijn Coenen, John Stultz, Todd Kjos, Arve Hjønnevåg,
	devel, Linux Kernel Mailing List, Todd Kjos

On Thu, Jul 27, 2017 at 02:38:30PM +0530, Amit Pundir wrote:
> Hi,
> 
> On 25 July 2017 at 14:43, Martijn Coenen <maco@google.com> wrote:
> > Hi John,
> >
> > On Mon, Jul 24, 2017 at 11:07 PM, John Stultz <john.stultz@linaro.org> wrote:
> >>
> >> 12-31 16:00:36.632  2518  2584 E hw-ProcessState: Using /dev/hwbinder
> >> failed: unable to mmap transaction memory.
> >
> > This doesn't look right. Is there anything in the kernel log?
> 
> There is a slight difference between this patch and the one that got
> pushed in android-4.9[1]. I cherry-picked those changes and got BT/WiFi
> working again on Hikey with the mainline-tracking (4.13-rc2) tree.
> 
> Regards,
> Amit Pundir
> [1] https://android.googlesource.com/kernel/common/+/872c26eb0776ef160447d8703779e2bce0b7230a%5E%21/#F0

And what is that difference?

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-27 13:23           ` Greg Kroah-Hartman
@ 2017-07-27 13:40             ` Martijn Coenen
  2017-07-27 13:42             ` Amit Pundir
  1 sibling, 0 replies; 58+ messages in thread
From: Martijn Coenen @ 2017-07-27 13:40 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Amit Pundir, John Stultz, Todd Kjos, Arve Hjønnevåg,
	devel, Linux Kernel Mailing List, Todd Kjos

Looks like this assignment somehow went missing in the upstream version:

proc->vma_vm_mm = current->group_leader->mm;

which probably causes us to bail out here in
binder_update_page_range() because proc->vma_vm_mm is NULL:

if (vma && mm != proc->vma_vm_mm) {
        pr_err("%d: vma mm and task mm mismatch\n",
               proc->pid);
        vma = NULL;
}


I'll prep a patch to fix this ASAP.
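
For reference, a minimal sketch of the binder_open() fragment with the
assignment restored, following the android-4.9 version referenced above
(surrounding code elided; an illustration, not the exact patch text):

        proc = kzalloc(sizeof(*proc), GFP_KERNEL);
        if (proc == NULL)
                return -ENOMEM;
        get_task_struct(current->group_leader);
        proc->tsk = current->group_leader;
        /* the assignment that went missing upstream: record the group
         * leader's mm so binder_update_page_range() compares against
         * the real mm instead of NULL */
        proc->vma_vm_mm = current->group_leader->mm;
        INIT_LIST_HEAD(&proc->todo);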

On Thu, Jul 27, 2017 at 3:23 PM, Greg Kroah-Hartman
<gregkh@linuxfoundation.org> wrote:
> On Thu, Jul 27, 2017 at 02:38:30PM +0530, Amit Pundir wrote:
>> Hi,
>>
>> On 25 July 2017 at 14:43, Martijn Coenen <maco@google.com> wrote:
>> > Hi John,
>> >
>> > On Mon, Jul 24, 2017 at 11:07 PM, John Stultz <john.stultz@linaro.org> wrote:
>> >>
>> >> 12-31 16:00:36.632  2518  2584 E hw-ProcessState: Using /dev/hwbinder
>> >> failed: unable to mmap transaction memory.
>> >
>> > This doesn't look right. Is there anything in the kernel log?
>>
>> There is a slight difference between this patch and the one that got
>> pushed to android-4.9[1]. I cherry-picked those changes and got BT/WiFi
>> working again on Hikey with the mainline tracking (4.13-rc2) tree.
>>
>> Regards,
>> Amit Pundir
>> [1] https://android.googlesource.com/kernel/common/+/872c26eb0776ef160447d8703779e2bce0b7230a%5E%21/#F0
>
> And what is that difference?


* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-27 13:23           ` Greg Kroah-Hartman
  2017-07-27 13:40             ` Martijn Coenen
@ 2017-07-27 13:42             ` Amit Pundir
  2017-07-28 11:58               ` Martijn Coenen
  1 sibling, 1 reply; 58+ messages in thread
From: Amit Pundir @ 2017-07-27 13:42 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Martijn Coenen, John Stultz, Todd Kjos, Arve Hjønnevåg,
	devel, Linux Kernel Mailing List, Todd Kjos

On 27 July 2017 at 18:53, Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
> On Thu, Jul 27, 2017 at 02:38:30PM +0530, Amit Pundir wrote:
>> Hi,
>>
>> On 25 July 2017 at 14:43, Martijn Coenen <maco@google.com> wrote:
>> > Hi John,
>> >
>> > On Mon, Jul 24, 2017 at 11:07 PM, John Stultz <john.stultz@linaro.org> wrote:
>> >>
>> >> 12-31 16:00:36.632  2518  2584 E hw-ProcessState: Using /dev/hwbinder
>> >> failed: unable to mmap transaction memory.
>> >
>> > This doesn't look right. Is there anything in the kernel log?
>>
>> There is a slight difference between this patch and the one that got
>> pushed to android-4.9[1]. I cherry-picked those changes and got BT/WiFi
>> working again on Hikey with the mainline tracking (4.13-rc2) tree.
>>
>> Regards,
>> Amit Pundir
>> [1] https://android.googlesource.com/kernel/common/+/872c26eb0776ef160447d8703779e2bce0b7230a%5E%21/#F0
>
> And what is that difference?

These two changes:

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index f7665c31feca..d7291a5a13e1 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -3362,7 +3362,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
        const char *failure_string;
        struct binder_buffer *buffer;

-       if (proc->tsk != current)
+       if (proc->tsk != current->group_leader)
                return -EINVAL;

        if ((vma->vm_end - vma->vm_start) > SZ_4M)
@@ -3466,6 +3466,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
                return -ENOMEM;
        get_task_struct(current->group_leader);
        proc->tsk = current->group_leader;
+       proc->vma_vm_mm = current->group_leader->mm;
        INIT_LIST_HEAD(&proc->todo);
        init_waitqueue_head(&proc->wait);
        proc->default_priority = task_nice(current);

John reported a hwbinder mmap failure in his logcat, so I'm assuming
the check in binder_mmap() is where binder failed and returned.
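
To illustrate that assumption (a sketch, not verified against John's
exact tree): binder_open() stores the group leader, while the old
binder_mmap() check compares against the calling thread, so an mmap
issued from any non-leader thread of the process fails:

        /* binder_open(), with only half of the android-4.9 change: */
        proc->tsk = current->group_leader;      /* the group leader */

        /* binder_mmap(), without the matching change: if mmap is
         * issued from a worker thread (e.g. the hci_thread in the
         * logcat above), current != current->group_leader, the check
         * trips, mmap fails, and userspace logs "unable to mmap
         * transaction memory" */
        if (proc->tsk != current)
                return -EINVAL;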

Regards,
Amit Pundir


* Re: [PATCH 02/37] binder: use group leader instead of open thread
  2017-07-27 13:42             ` Amit Pundir
@ 2017-07-28 11:58               ` Martijn Coenen
  0 siblings, 0 replies; 58+ messages in thread
From: Martijn Coenen @ 2017-07-28 11:58 UTC (permalink / raw)
  To: Amit Pundir
  Cc: Greg Kroah-Hartman, John Stultz, Todd Kjos,
	Arve Hjønnevåg, devel, Linux Kernel Mailing List,
	Todd Kjos

I took the time to look at what else differs between the common and
upstream trees, and just sent 3 patches to LKML to reconcile them. One
of the 3 fixes this particular problem.

On Thu, Jul 27, 2017 at 3:42 PM, Amit Pundir <amit.pundir@linaro.org> wrote:
> On 27 July 2017 at 18:53, Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
>> On Thu, Jul 27, 2017 at 02:38:30PM +0530, Amit Pundir wrote:
>>> Hi,
>>>
>>> On 25 July 2017 at 14:43, Martijn Coenen <maco@google.com> wrote:
>>> > Hi John,
>>> >
>>> > On Mon, Jul 24, 2017 at 11:07 PM, John Stultz <john.stultz@linaro.org> wrote:
>>> >>
>>> >> 12-31 16:00:36.632  2518  2584 E hw-ProcessState: Using /dev/hwbinder
>>> >> failed: unable to mmap transaction memory.
>>> >
>>> > This doesn't look right. Is there anything in the kernel log?
>>>
>>> There is a slight difference between this patch and the one that got
>>> pushed to android-4.9[1]. I cherry-picked those changes and got BT/WiFi
>>> working again on Hikey with the mainline tracking (4.13-rc2) tree.
>>>
>>> Regards,
>>> Amit Pundir
>>> [1] https://android.googlesource.com/kernel/common/+/872c26eb0776ef160447d8703779e2bce0b7230a%5E%21/#F0
>>
>> And what is that difference?
>
> These two changes:
>
> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> index f7665c31feca..d7291a5a13e1 100644
> --- a/drivers/android/binder.c
> +++ b/drivers/android/binder.c
> @@ -3362,7 +3362,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
>         const char *failure_string;
>         struct binder_buffer *buffer;
>
> -       if (proc->tsk != current)
> +       if (proc->tsk != current->group_leader)
>                 return -EINVAL;
>
>         if ((vma->vm_end - vma->vm_start) > SZ_4M)
> @@ -3466,6 +3466,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
>                 return -ENOMEM;
>         get_task_struct(current->group_leader);
>         proc->tsk = current->group_leader;
> +       proc->vma_vm_mm = current->group_leader->mm;
>         INIT_LIST_HEAD(&proc->todo);
>         init_waitqueue_head(&proc->wait);
>         proc->default_priority = task_nice(current);
>
> John reported a hwbinder mmap failure in his logcat, so I'm assuming
> the check in binder_mmap() is where binder failed and returned.
>
> Regards,
> Amit Pundir


end of thread

Thread overview: 58+ messages
2017-06-29 19:01 [PATCH 00/37] fine-grained locking in binder driver Todd Kjos
2017-06-29 19:01 ` [PATCH 01/37] Revert "android: binder: Sanity check at binder ioctl" Todd Kjos
2017-07-03  9:17   ` Greg KH
     [not found]     ` <CAHRSSExh9JX5xiSRig55DSei31C_BPSasOKB+BTC6jjjuZ+ZpA@mail.gmail.com>
2017-07-05 18:47       ` Greg KH
2017-06-29 19:01 ` [PATCH 02/37] binder: use group leader instead of open thread Todd Kjos
2017-07-03  9:17   ` Greg KH
     [not found]     ` <CAHRSSEyH3t0igLJqcC4e-HR68RH0bg4T310jnRHZzrMChoOeOg@mail.gmail.com>
2017-07-05 18:47       ` Greg KH
2017-07-07 18:23         ` Todd Kjos
2017-07-07 18:29           ` Greg KH
2017-07-24 21:00   ` John Stultz
2017-07-24 21:07     ` John Stultz
2017-07-25  9:13       ` Martijn Coenen
2017-07-27  9:08         ` Amit Pundir
2017-07-27 13:23           ` Greg Kroah-Hartman
2017-07-27 13:40             ` Martijn Coenen
2017-07-27 13:42             ` Amit Pundir
2017-07-28 11:58               ` Martijn Coenen
2017-07-24 21:23     ` Greg Kroah-Hartman
2017-07-24 21:53       ` John Stultz
2017-06-29 19:01 ` [PATCH 03/37] binder: Use wake up hint for synchronous transactions Todd Kjos
2017-07-03  9:18   ` Greg KH
2017-06-29 19:01 ` [PATCH 04/37] binder: separate binder allocator structure from binder proc Todd Kjos
2017-06-29 19:01 ` [PATCH 05/37] binder: remove unneeded cleanup code Todd Kjos
2017-06-29 19:01 ` [PATCH 06/37] binder: separate out binder_alloc functions Todd Kjos
2017-06-29 19:01 ` [PATCH 07/37] binder: move binder_alloc to separate file Todd Kjos
2017-06-29 19:01 ` [PATCH 08/37] binder: remove binder_debug_no_lock mechanism Todd Kjos
2017-06-29 19:01 ` [PATCH 09/37] binder: add protection for non-perf cases Todd Kjos
2017-06-29 19:01 ` [PATCH 10/37] binder: change binder_stats to atomics Todd Kjos
2017-06-29 19:01 ` [PATCH 11/37] binder: make binder_last_id an atomic Todd Kjos
2017-06-29 19:01 ` [PATCH 12/37] binder: add log information for binder transaction failures Todd Kjos
2017-06-29 19:01 ` [PATCH 13/37] binder: refactor queue management in binder_thread_read Todd Kjos
2017-06-29 19:01 ` [PATCH 14/37] binder: avoid race conditions when enqueuing txn Todd Kjos
2017-06-29 19:01 ` [PATCH 15/37] binder: don't modify thread->looper from other threads Todd Kjos
2017-06-29 19:01 ` [PATCH 16/37] binder: remove dead code in binder_get_ref_for_node Todd Kjos
2017-06-29 19:01 ` [PATCH 17/37] binder: protect against two threads freeing buffer Todd Kjos
2017-06-29 19:01 ` [PATCH 18/37] binder: add more debug info when allocation fails Todd Kjos
2017-06-29 19:01 ` [PATCH 19/37] binder: use atomic for transaction_log index Todd Kjos
2017-06-29 19:01 ` [PATCH 20/37] binder: refactor binder_pop_transaction Todd Kjos
2017-06-29 19:01 ` [PATCH 21/37] binder: guarantee txn complete / errors delivered in-order Todd Kjos
2017-06-29 19:01 ` [PATCH 22/37] binder: make sure target_node has strong ref Todd Kjos
2017-06-29 19:01 ` [PATCH 23/37] binder: make sure accesses to proc/thread are safe Todd Kjos
2017-06-29 19:01 ` [PATCH 24/37] binder: refactor binder ref inc/dec for thread safety Todd Kjos
2017-06-29 19:01 ` [PATCH 25/37] binder: use node->tmp_refs to ensure node safety Todd Kjos
2017-06-29 19:02 ` [PATCH 26/37] binder: introduce locking helper functions Todd Kjos
2017-06-29 19:02 ` [PATCH 27/37] binder: use inner lock to sync work dq and node counts Todd Kjos
2017-06-29 19:02 ` [PATCH 28/37] binder: add spinlocks to protect todo lists Todd Kjos
2017-06-29 19:02 ` [PATCH 29/37] binder: add spinlock to protect binder_node Todd Kjos
2017-06-29 19:02 ` [PATCH 30/37] binder: protect proc->nodes with inner lock Todd Kjos
2017-06-29 19:02 ` [PATCH 31/37] binder: protect proc->threads with inner_lock Todd Kjos
2017-06-29 19:02 ` [PATCH 32/37] binder: protect transaction_stack with inner lock Todd Kjos
2017-06-29 19:02 ` [PATCH 33/37] binder: use inner lock to protect thread accounting Todd Kjos
2017-06-29 19:02 ` [PATCH 34/37] binder: protect binder_ref with outer lock Todd Kjos
2017-06-29 19:02 ` [PATCH 35/37] binder: protect against stale pointers in print_binder_transaction Todd Kjos
2017-06-29 19:02 ` [PATCH 36/37] binder: fix death race conditions Todd Kjos
2017-06-30  6:05   ` Greg KH
2017-06-29 19:02 ` [PATCH 37/37] binder: remove global binder lock Todd Kjos
2017-06-30  6:04 ` [PATCH 00/37] fine-grained locking in binder driver Greg KH
2017-07-17 12:49   ` Greg KH
