From: Jiri Olsa <jolsa@redhat.com>
To: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, Ingo Molnar <mingo@kernel.org>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Michael Petlan <mpetlan@redhat.com>, Wade Mealing <wmealing@redhat.com>
Subject: Re: [PATCH] perf: Fix race in perf_mmap_close function
Date: Fri, 11 Sep 2020 09:49:31 +0200
Message-ID: <20200911074931.GA1714160@krava>
In-Reply-To: <CAM9d7ciEAA_3Quo1-q7hU=Te+hBgJ2wYAjbDazXd7yS70HrhPA@mail.gmail.com>

On Fri, Sep 11, 2020 at 12:05:10PM +0900, Namhyung Kim wrote:
> Hi Jiri,
>
> On Thu, Sep 10, 2020 at 11:50 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > On Thu, Sep 10, 2020 at 10:48:02PM +0900, Namhyung Kim wrote:
> >
> > SNIP
> >
> > > >   _do_fork+0x83/0x3a0
> > > >   __do_sys_wait4+0x83/0x90
> > > >   __do_sys_clone+0x85/0xa0
> > > >   do_syscall_64+0x5b/0x1e0
> > > >   entry_SYSCALL_64_after_hwframe+0x44/0xa9
> > > >
> > > > Using atomic decrease and check instead of separated calls.
> > > > This fixes CVE-2020-14351.
> > > >
> > > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > > ---
> > > >  kernel/events/core.c | 4 +---
> > > >  1 file changed, 1 insertion(+), 3 deletions(-)
> > > >
> > > > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > > > index 7ed5248f0445..29313cc54d9e 100644
> > > > --- a/kernel/events/core.c
> > > > +++ b/kernel/events/core.c
> > > > @@ -5903,8 +5903,6 @@ static void perf_mmap_close(struct vm_area_struct *vma)
> > > >                 mutex_unlock(&event->mmap_mutex);
> > > >         }
> > > >
> > > > -       atomic_dec(&rb->mmap_count);
> > > > -
> > > >         if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
> > > >                 goto out_put;
> > >
> > > But when it takes the goto, rb->mmap_count won't decrement anymore..
> >
> > event->mmap_count is per event, so if we have a race in here,
> > 2 threads can go through with each event->mmap_count reaching zero
>
> Maybe I'm missing something.
>
> But as far as I can see, perf_mmap_close() always decremented both
> rb->mmap_count and event->mmap_count.  But with this change,
> it seems not decrement rb->mmap_count when event->mmap_count
> doesn't go to zero, right?

ugh, that's right.. how about change below

jirka


---
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 7ed5248f0445..8ab2400aef55 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5868,11 +5868,11 @@ static void perf_pmu_output_stop(struct perf_event *event);
 static void perf_mmap_close(struct vm_area_struct *vma)
 {
 	struct perf_event *event = vma->vm_file->private_data;
-
 	struct perf_buffer *rb = ring_buffer_get(event);
 	struct user_struct *mmap_user = rb->mmap_user;
 	int mmap_locked = rb->mmap_locked;
 	unsigned long size = perf_data_size(rb);
+	bool detach_rest = false;
 
 	if (event->pmu->event_unmapped)
 		event->pmu->event_unmapped(event, vma->vm_mm);
@@ -5903,7 +5903,8 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 		mutex_unlock(&event->mmap_mutex);
 	}
 
-	atomic_dec(&rb->mmap_count);
+	if (atomic_dec_and_test(&rb->mmap_count))
+		detach_rest = true;
 
 	if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
 		goto out_put;
@@ -5912,7 +5913,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 	mutex_unlock(&event->mmap_mutex);
 
 	/* If there's still other mmap()s of this buffer, we're done. */
-	if (atomic_read(&rb->mmap_count))
+	if (!detach_rest)
 		goto out_put;
 
 	/*
Thread overview: 13+ messages

2020-09-10 10:41 [PATCH] perf: Fix race in perf_mmap_close function, Jiri Olsa
2020-09-10 13:48 ` Namhyung Kim
2020-09-10 14:47 ` Jiri Olsa
2020-09-11  3:05 ` Namhyung Kim
2020-09-11  7:49 ` Jiri Olsa [this message]
2020-09-14 12:48 ` Namhyung Kim
2020-09-14 20:59 ` Jiri Olsa
2020-09-15 15:35 ` Michael Petlan
2020-09-16 11:53 ` [PATCHv2] perf: Fix race in perf_mmap_close function, Jiri Olsa
2020-09-16 13:54 ` peterz
2020-09-16 14:38 ` Jiri Olsa
2020-09-16 14:05 ` peterz
2020-10-12 11:45 ` [tip: perf/core] perf/core: Fix race in the perf_mmap_close() function, tip-bot2 for Jiri Olsa