* [PATCH bpf] bpf/tracing: fix a deadlock in perf_event_detach_bpf_prog
@ 2018-04-09 16:18 Yonghong Song
  2018-04-09 16:47 ` Alexei Starovoitov
  0 siblings, 1 reply; 4+ messages in thread
From: Yonghong Song @ 2018-04-09 16:18 UTC (permalink / raw)
  To: ast, daniel, netdev; +Cc: kernel-team

syzbot reported a possible deadlock in perf_event_detach_bpf_prog.
The error details:
  ======================================================
  WARNING: possible circular locking dependency detected
  4.16.0-rc7+ #3 Not tainted
  ------------------------------------------------------
  syz-executor7/24531 is trying to acquire lock:
   (bpf_event_mutex){+.+.}, at: [<000000008a849b07>] perf_event_detach_bpf_prog+0x92/0x3d0 kernel/trace/bpf_trace.c:854

  but task is already holding lock:
   (&mm->mmap_sem){++++}, at: [<0000000038768f87>] vm_mmap_pgoff+0x198/0x280 mm/util.c:353

  which lock already depends on the new lock.

  the existing dependency chain (in reverse order) is:

  -> #1 (&mm->mmap_sem){++++}:
       __might_fault+0x13a/0x1d0 mm/memory.c:4571
       _copy_to_user+0x2c/0xc0 lib/usercopy.c:25
       copy_to_user include/linux/uaccess.h:155 [inline]
       bpf_prog_array_copy_info+0xf2/0x1c0 kernel/bpf/core.c:1694
       perf_event_query_prog_array+0x1c7/0x2c0 kernel/trace/bpf_trace.c:891
       _perf_ioctl kernel/events/core.c:4750 [inline]
       perf_ioctl+0x3e1/0x1480 kernel/events/core.c:4770
       vfs_ioctl fs/ioctl.c:46 [inline]
       do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:686
       SYSC_ioctl fs/ioctl.c:701 [inline]
       SyS_ioctl+0x8f/0xc0 fs/ioctl.c:692
       do_syscall_64+0x281/0x940 arch/x86/entry/common.c:287
       entry_SYSCALL_64_after_hwframe+0x42/0xb7

  -> #0 (bpf_event_mutex){+.+.}:
       lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:3920
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0x16f/0x1a80 kernel/locking/mutex.c:893
       mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
       perf_event_detach_bpf_prog+0x92/0x3d0 kernel/trace/bpf_trace.c:854
       perf_event_free_bpf_prog kernel/events/core.c:8147 [inline]
       _free_event+0xbdb/0x10f0 kernel/events/core.c:4116
       put_event+0x24/0x30 kernel/events/core.c:4204
       perf_mmap_close+0x60d/0x1010 kernel/events/core.c:5172
       remove_vma+0xb4/0x1b0 mm/mmap.c:172
       remove_vma_list mm/mmap.c:2490 [inline]
       do_munmap+0x82a/0xdf0 mm/mmap.c:2731
       mmap_region+0x59e/0x15a0 mm/mmap.c:1646
       do_mmap+0x6c0/0xe00 mm/mmap.c:1483
       do_mmap_pgoff include/linux/mm.h:2223 [inline]
       vm_mmap_pgoff+0x1de/0x280 mm/util.c:355
       SYSC_mmap_pgoff mm/mmap.c:1533 [inline]
       SyS_mmap_pgoff+0x462/0x5f0 mm/mmap.c:1491
       SYSC_mmap arch/x86/kernel/sys_x86_64.c:100 [inline]
       SyS_mmap+0x16/0x20 arch/x86/kernel/sys_x86_64.c:91
       do_syscall_64+0x281/0x940 arch/x86/entry/common.c:287
       entry_SYSCALL_64_after_hwframe+0x42/0xb7

  other info that might help us debug this:

   Possible unsafe locking scenario:

         CPU0                    CPU1
         ----                    ----
    lock(&mm->mmap_sem);
                                 lock(bpf_event_mutex);
                                 lock(&mm->mmap_sem);
    lock(bpf_event_mutex);

   *** DEADLOCK ***
  ======================================================

The bug was introduced by commit f371b304f12e ("bpf/tracing: allow
user space to query prog array on the same tp"), where copy_to_user,
which may take mm->mmap_sem when it faults, is called while
bpf_event_mutex is held.
At the same time, during perf_event file descriptor close,
mm->mmap_sem is held first and the subsequent
perf_event_detach_bpf_prog then needs bpf_event_mutex.
Such a scenario causes a deadlock.

As suggested by Daniel, moving copy_to_user out of the
bpf_event_mutex critical section should fix the problem.
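
In outline, the query path is restructured as below (an illustrative
sketch only, with the prog_cnt copy handled the same way; the
authoritative change is the diff that follows):

  /* before: bpf_prog_array_copy_info() did copy_to_user itself,
   * i.e. while bpf_event_mutex was still held
   */
  mutex_lock(&bpf_event_mutex);
  ret = bpf_prog_array_copy_info(event->tp_event->prog_array,
                                 uquery->ids, query.ids_len,
                                 &uquery->prog_cnt);
  mutex_unlock(&bpf_event_mutex);

  /* after: fill a kernel buffer under the mutex, then do the
   * user copies (which may fault and take mmap_sem) unlocked
   */
  ids = kcalloc(ids_len, sizeof(u32), GFP_USER | __GFP_NOWARN);
  mutex_lock(&bpf_event_mutex);
  ret = bpf_prog_array_copy_info(event->tp_event->prog_array,
                                 ids, ids_len, &prog_cnt);
  mutex_unlock(&bpf_event_mutex);
  if (!ret || ret == -ENOSPC)
          if (copy_to_user(uquery->ids, ids, ids_len * sizeof(u32)))
                  ret = -EFAULT;
  kfree(ids);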

Fixes: f371b304f12e ("bpf/tracing: allow user space to query prog array on the same tp")
Reported-by: syzbot+dc5ca0e4c9bfafaf2bae@syzkaller.appspotmail.com
Signed-off-by: Yonghong Song <yhs@fb.com>
---
 include/linux/bpf.h      |  4 ++--
 kernel/bpf/core.c        | 25 +++++++++++++++++++------
 kernel/trace/bpf_trace.c | 24 ++++++++++++++++++++----
 3 files changed, 41 insertions(+), 12 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 95a7abd..486e65e 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -339,8 +339,8 @@ int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *progs,
 void bpf_prog_array_delete_safe(struct bpf_prog_array __rcu *progs,
 				struct bpf_prog *old_prog);
 int bpf_prog_array_copy_info(struct bpf_prog_array __rcu *array,
-			     __u32 __user *prog_ids, u32 request_cnt,
-			     __u32 __user *prog_cnt);
+			     u32 *prog_ids, u32 request_cnt,
+			     u32 *prog_cnt);
 int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
 			struct bpf_prog *exclude_prog,
 			struct bpf_prog *include_prog,
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index d315b39..a95a7de 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1683,22 +1683,35 @@ int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
 }
 
 int bpf_prog_array_copy_info(struct bpf_prog_array __rcu *array,
-			     __u32 __user *prog_ids, u32 request_cnt,
-			     __u32 __user *prog_cnt)
+			     u32 *prog_ids, u32 request_cnt,
+			     u32 *prog_cnt)
 {
-	u32 cnt = 0;
+	struct bpf_prog **prog;
+	u32 i = 0, cnt = 0;
 
 	if (array)
 		cnt = bpf_prog_array_length(array);
 
-	if (copy_to_user(prog_cnt, &cnt, sizeof(cnt)))
-		return -EFAULT;
+	*prog_cnt = cnt;
 
 	/* return early if user requested only program count or nothing to copy */
 	if (!request_cnt || !cnt)
 		return 0;
 
-	return bpf_prog_array_copy_to_user(array, prog_ids, request_cnt);
+	/* this function is called under trace/bpf_trace.c: bpf_event_mutex */
+	prog = rcu_dereference_check(array, 1)->progs;
+	for (; *prog; prog++) {
+		if (*prog == &dummy_bpf_prog.prog)
+			continue;
+		prog_ids[i] = (*prog)->aux->id;
+		if (++i == request_cnt) {
+			prog++;
+			break;
+		}
+	}
+	if (!!(*prog))
+		return -ENOSPC;
+	return 0;
 }
 
 static void bpf_prog_free_deferred(struct work_struct *work)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index d88e96d..bd1e458 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -977,6 +977,7 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 {
 	struct perf_event_query_bpf __user *uquery = info;
 	struct perf_event_query_bpf query = {};
+	u32 *ids, prog_cnt, ids_len;
 	int ret;
 
 	if (!capable(CAP_SYS_ADMIN))
@@ -985,16 +986,31 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 		return -EINVAL;
 	if (copy_from_user(&query, uquery, sizeof(query)))
 		return -EFAULT;
-	if (query.ids_len > BPF_TRACE_MAX_PROGS)
+
+	ids_len = query.ids_len;
+	if (ids_len > BPF_TRACE_MAX_PROGS)
 		return -E2BIG;
+	ids = kcalloc(ids_len, sizeof(u32), GFP_USER | __GFP_NOWARN);
+	if (!ids)
+		return -ENOMEM;
 
 	mutex_lock(&bpf_event_mutex);
 	ret = bpf_prog_array_copy_info(event->tp_event->prog_array,
-				       uquery->ids,
-				       query.ids_len,
-				       &uquery->prog_cnt);
+				       ids,
+				       ids_len,
+				       &prog_cnt);
 	mutex_unlock(&bpf_event_mutex);
 
+	if (!ret || ret == -ENOSPC) {
+		if (copy_to_user(&uquery->prog_cnt, &prog_cnt, sizeof(prog_cnt)) ||
+		    copy_to_user(uquery->ids, ids, ids_len * sizeof(u32))) {
+			ret = -EFAULT;
+			goto out;
+		}
+	}
+
+out:
+	kfree(ids);
 	return ret;
 }
 
-- 
2.9.5


* Re: [PATCH bpf] bpf/tracing: fix a deadlock in perf_event_detach_bpf_prog
  2018-04-09 16:18 [PATCH bpf] bpf/tracing: fix a deadlock in perf_event_detach_bpf_prog Yonghong Song
@ 2018-04-09 16:47 ` Alexei Starovoitov
  2018-04-09 18:41   ` Yonghong Song
  0 siblings, 1 reply; 4+ messages in thread
From: Alexei Starovoitov @ 2018-04-09 16:47 UTC (permalink / raw)
  To: Yonghong Song, daniel, netdev; +Cc: kernel-team

On 4/9/18 9:18 AM, Yonghong Song wrote:
> syzbot reported a possible deadlock in perf_event_detach_bpf_prog.
...
> @@ -985,16 +986,31 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
>  		return -EINVAL;
>  	if (copy_from_user(&query, uquery, sizeof(query)))
>  		return -EFAULT;
> -	if (query.ids_len > BPF_TRACE_MAX_PROGS)
> +
> +	ids_len = query.ids_len;
> +	if (ids_len > BPF_TRACE_MAX_PROGS)
>  		return -E2BIG;
> +	ids = kcalloc(ids_len, sizeof(u32), GFP_USER | __GFP_NOWARN);
> +	if (!ids)
> +		return -ENOMEM;
>
>  	mutex_lock(&bpf_event_mutex);
>  	ret = bpf_prog_array_copy_info(event->tp_event->prog_array,
> -				       uquery->ids,
> -				       query.ids_len,
> -				       &uquery->prog_cnt);
> +				       ids,
> +				       ids_len,
> +				       &prog_cnt);
>  	mutex_unlock(&bpf_event_mutex);
>
> +	if (!ret || ret == -ENOSPC) {
> +		if (copy_to_user(&uquery->prog_cnt, &prog_cnt, sizeof(prog_cnt)) ||
> +		    copy_to_user(uquery->ids, ids, ids_len * sizeof(u32))) {
> +			ret = -EFAULT;
> +			goto out;
> +		}
> +	}
> +
> +out:
> +	kfree(ids);

alloc/free just to avoid this locking dependency feels suboptimal.

maybe we can get rid of bpf_event_mutex in some cases?
the perf event itself is locked via perf_event_ctx_lock() when we're
calling perf_event_query_prog_array, perf_event_attach|detach_bpf_prog.
I forgot what was the motivation for us to introduce it in the
first place.


* Re: [PATCH bpf] bpf/tracing: fix a deadlock in perf_event_detach_bpf_prog
  2018-04-09 16:47 ` Alexei Starovoitov
@ 2018-04-09 18:41   ` Yonghong Song
  2018-04-10  0:51     ` Alexei Starovoitov
  0 siblings, 1 reply; 4+ messages in thread
From: Yonghong Song @ 2018-04-09 18:41 UTC (permalink / raw)
  To: Alexei Starovoitov, daniel, netdev; +Cc: kernel-team



On 4/9/18 9:47 AM, Alexei Starovoitov wrote:
> On 4/9/18 9:18 AM, Yonghong Song wrote:
>> syzbot reported a possible deadlock in perf_event_detach_bpf_prog.
> ...
>> @@ -985,16 +986,31 @@ int perf_event_query_prog_array(struct 
>> perf_event *event, void __user *info)
>>          return -EINVAL;
>>      if (copy_from_user(&query, uquery, sizeof(query)))
>>          return -EFAULT;
>> -    if (query.ids_len > BPF_TRACE_MAX_PROGS)
>> +
>> +    ids_len = query.ids_len;
>> +    if (ids_len > BPF_TRACE_MAX_PROGS)
>>          return -E2BIG;
>> +    ids = kcalloc(ids_len, sizeof(u32), GFP_USER | __GFP_NOWARN);
>> +    if (!ids)
>> +        return -ENOMEM;
>>
>>      mutex_lock(&bpf_event_mutex);
>>      ret = bpf_prog_array_copy_info(event->tp_event->prog_array,
>> -                       uquery->ids,
>> -                       query.ids_len,
>> -                       &uquery->prog_cnt);
>> +                       ids,
>> +                       ids_len,
>> +                       &prog_cnt);
>>      mutex_unlock(&bpf_event_mutex);
>>
>> +    if (!ret || ret == -ENOSPC) {
>> +        if (copy_to_user(&uquery->prog_cnt, &prog_cnt, 
>> sizeof(prog_cnt)) ||
>> +            copy_to_user(uquery->ids, ids, ids_len * sizeof(u32))) {
>> +            ret = -EFAULT;
>> +            goto out;
>> +        }
>> +    }
>> +
>> +out:
>> +    kfree(ids);
> 
> alloc/free just to avoid this locking dependency feels suboptimal.

We actually already do a kcalloc/kfree in bpf_prog_array_copy_to_user.
In that function, we do not copy_to_user one id at a time.
We allocate a temporary array, store the result there,
and at the end call one copy_to_user to copy to the user buffer.

The patch here just moves this allocation and the associated copy_to_user
out of that function and out of bpf_event_mutex. It does not introduce
new allocations.
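
Roughly, that existing helper looks like this (a sketch from memory of
the code at the time, with the truncation handling simplified; not a
verbatim copy):

  int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *progs,
                                  __u32 __user *prog_ids, u32 cnt)
  {
          struct bpf_prog **prog;
          u32 *ids, i = 0;
          int err;

          /* temporary kernel buffer, filled while walking the array */
          ids = kcalloc(cnt, sizeof(u32), GFP_USER | __GFP_NOWARN);
          if (!ids)
                  return -ENOMEM;
          rcu_read_lock();
          prog = rcu_dereference(progs)->progs;
          for (; *prog; prog++) {
                  ids[i] = (*prog)->aux->id;
                  if (++i == cnt)
                          break;
          }
          rcu_read_unlock();
          /* one user copy at the end, after the array walk */
          err = copy_to_user(prog_ids, ids, i * sizeof(u32));
          kfree(ids);
          return err ? -EFAULT : 0;
  }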

> 
> maybe we can get rid of bpf_event_mutex in some cases?
> the perf event itself is locked via perf_event_ctx_lock() when we're
> calling perf_event_query_prog_array, perf_event_attach|detach_bpf_prog.
> I forgot what was the motivation for us to introduce it in the
> first place.

The original motivation for the lock is to make sure bpf_prog_array
does not change in the middle of attach/detach/query. It looks like
we have:
    . perf_event_attach|query under perf_event_ctx_lock
    . perf_event_detach not under perf_event_ctx_lock
Taking perf_event_ctx_lock in perf_event_detach as well could still
leave the deadlock.

> 


* Re: [PATCH bpf] bpf/tracing: fix a deadlock in perf_event_detach_bpf_prog
  2018-04-09 18:41   ` Yonghong Song
@ 2018-04-10  0:51     ` Alexei Starovoitov
  0 siblings, 0 replies; 4+ messages in thread
From: Alexei Starovoitov @ 2018-04-10  0:51 UTC (permalink / raw)
  To: Yonghong Song, daniel, netdev; +Cc: kernel-team

On 4/9/18 11:41 AM, Yonghong Song wrote:
>
>
> On 4/9/18 9:47 AM, Alexei Starovoitov wrote:
>> On 4/9/18 9:18 AM, Yonghong Song wrote:
>>> syzbot reported a possible deadlock in perf_event_detach_bpf_prog.
>> ...
>>> @@ -985,16 +986,31 @@ int perf_event_query_prog_array(struct
>>> perf_event *event, void __user *info)
>>>          return -EINVAL;
>>>      if (copy_from_user(&query, uquery, sizeof(query)))
>>>          return -EFAULT;
>>> -    if (query.ids_len > BPF_TRACE_MAX_PROGS)
>>> +
>>> +    ids_len = query.ids_len;
>>> +    if (ids_len > BPF_TRACE_MAX_PROGS)
>>>          return -E2BIG;
>>> +    ids = kcalloc(ids_len, sizeof(u32), GFP_USER | __GFP_NOWARN);
>>> +    if (!ids)
>>> +        return -ENOMEM;
>>>
>>>      mutex_lock(&bpf_event_mutex);
>>>      ret = bpf_prog_array_copy_info(event->tp_event->prog_array,
>>> -                       uquery->ids,
>>> -                       query.ids_len,
>>> -                       &uquery->prog_cnt);
>>> +                       ids,
>>> +                       ids_len,
>>> +                       &prog_cnt);
>>>      mutex_unlock(&bpf_event_mutex);
>>>
>>> +    if (!ret || ret == -ENOSPC) {
>>> +        if (copy_to_user(&uquery->prog_cnt, &prog_cnt,
>>> sizeof(prog_cnt)) ||
>>> +            copy_to_user(uquery->ids, ids, ids_len * sizeof(u32))) {
>>> +            ret = -EFAULT;
>>> +            goto out;
>>> +        }
>>> +    }
>>> +
>>> +out:
>>> +    kfree(ids);
>>
>> alloc/free just to avoid this locking dependency feels suboptimal.
>
> We actually already do a kcalloc/kfree in bpf_prog_array_copy_to_user.
> In that function, we do not copy_to_user one id at a time.
> We allocate a temporary array, store the result there,
> and at the end call one copy_to_user to copy to the user buffer.
>
> The patch here just moves this allocation and the associated copy_to_user
> out of that function and out of bpf_event_mutex. It does not introduce
> new allocations.

I see, so the patch is essentially open coding
bpf_prog_array_copy_to_user().
Can we share the code then?
The bpf/core.c callsite used by trace/bpf_trace.c
and the similar callsite in bpf/cgroup.c
should be using a common helper.
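
Something like the below, perhaps (just a sketch of the suggestion; the
name and exact signature are made up here):

  /* walk the array under the caller's lock and fill a kernel buffer;
   * each caller then decides where (and whether) to copy_to_user
   */
  static int bpf_prog_array_copy_ids(struct bpf_prog_array __rcu *array,
                                     u32 *prog_ids, u32 request_cnt)
  {
          struct bpf_prog **prog;
          u32 i = 0;

          prog = rcu_dereference_check(array, 1)->progs;
          for (; *prog; prog++) {
                  if (*prog == &dummy_bpf_prog.prog)
                          continue;
                  prog_ids[i] = (*prog)->aux->id;
                  if (++i == request_cnt) {
                          prog++;
                          break;
                  }
          }
          /* report truncation so the caller can return -ENOSPC */
          return *prog ? -ENOSPC : 0;
  }

bpf_prog_array_copy_to_user() and bpf_prog_array_copy_info() would then
both call it and do their own copy_to_user (or not) outside of it.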


>>
>> maybe we can get rid of bpf_event_mutex in some cases?
>> the perf event itself is locked via perf_event_ctx_lock() when we're
>> calling perf_event_query_prog_array, perf_event_attach|detach_bpf_prog.
>> I forgot what was the motivation for us to introduce it in the
>> first place.
>
> The original motivation for the lock is to make sure bpf_prog_array
> does not change in the middle of attach/detach/query. It looks like
> we have:
>    . perf_event_attach|query under perf_event_ctx_lock
>    . perf_event_detach not under perf_event_ctx_lock
> Taking perf_event_ctx_lock in perf_event_detach as well could still
> leave the deadlock.

ahh, right, since the progs are in event->tp_event, which can
be shared by multiple perf_events.
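That is, roughly (abridged, just to illustrate the sharing):

  struct perf_event {
          ...
          struct trace_event_call *tp_event;   /* may be shared between events */
          ...
  };

  struct trace_event_call {
          ...
          struct bpf_prog_array __rcu *prog_array;   /* the shared prog list */
          ...
  };

so per-event locking like perf_event_ctx_lock() cannot serialize
attach/detach/query on the same prog_array coming from different events.
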
Scratch that idea.

