From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 16 Sep 2020 13:53:11 +0200
From: Jiri Olsa
To: Peter Zijlstra
Cc: Namhyung Kim, Jiri Olsa, Arnaldo Carvalho de Melo, lkml,
        Ingo Molnar, Alexander Shishkin, Wade Mealing
Subject: [PATCHv2] perf: Fix race in perf_mmap_close function
Message-ID: <20200916115311.GE2301783@krava>
References: <20200910104153.1672460-1-jolsa@kernel.org>
 <20200910144744.GA1663813@krava>
 <20200911074931.GA1714160@krava>
 <20200914205936.GD1714160@krava>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Mailing-List: linux-kernel@vger.kernel.org

There's a possible race in perf_mmap_close when checking the ring
buffer's mmap_count refcount value. The problem is that the mmap_count
check is not atomic, because we call atomic_dec and atomic_read
separately:

  perf_mmap_close:
  ...
         atomic_dec(&rb->mmap_count);
  ...
         if (atomic_read(&rb->mmap_count))
            goto out_put;

         free_uid

  out_put:
         ring_buffer_put(rb); /* could be last */

The race can happen when we have two (or more) events sharing the same
ring buffer: both go through atomic_dec, and then both see 0 as the
refcount value later in atomic_read. Then both will go on and execute
code which is meant to be run just once.

The code that detaches the ring buffer is probably fine to be executed
more than once, but the problem is in calling free_uid, which will
later show up in related crashes and refcount warnings, like:

  refcount_t: addition on 0; use-after-free.
  ...
  RIP: 0010:refcount_warn_saturate+0x6d/0xf
  ...
  Call Trace:
   prepare_creds+0x190/0x1e0
   copy_creds+0x35/0x172
   copy_process+0x471/0x1a80
   _do_fork+0x83/0x3a0
   __do_sys_wait4+0x83/0x90
   __do_sys_clone+0x85/0xa0
   do_syscall_64+0x5b/0x1e0
   entry_SYSCALL_64_after_hwframe+0x44/0xa9

Use an atomic decrement-and-check instead of the separate calls.

This fixes CVE-2020-14351.

Acked-by: Namhyung Kim
Tested-by: Michael Petlan
Signed-off-by: Jiri Olsa
---
 kernel/events/core.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 7ed5248f0445..8ab2400aef55 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5868,11 +5868,11 @@ static void perf_pmu_output_stop(struct perf_event *event);
 static void perf_mmap_close(struct vm_area_struct *vma)
 {
 	struct perf_event *event = vma->vm_file->private_data;
-
 	struct perf_buffer *rb = ring_buffer_get(event);
 	struct user_struct *mmap_user = rb->mmap_user;
 	int mmap_locked = rb->mmap_locked;
 	unsigned long size = perf_data_size(rb);
+	bool detach_rest = false;
 
 	if (event->pmu->event_unmapped)
 		event->pmu->event_unmapped(event, vma->vm_mm);
@@ -5903,7 +5903,8 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 		mutex_unlock(&event->mmap_mutex);
 	}
 
-	atomic_dec(&rb->mmap_count);
+	if (atomic_dec_and_test(&rb->mmap_count))
+		detach_rest = true;
 
 	if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
 		goto out_put;
@@ -5912,7 +5913,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 	mutex_unlock(&event->mmap_mutex);
 
 	/* If there's still other mmap()s of this buffer, we're done. */
-	if (atomic_read(&rb->mmap_count))
+	if (!detach_rest)
 		goto out_put;
 
 	/*
-- 
2.26.2
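
For readers following along, here is a minimal userspace sketch of the
same dec-then-read race and the dec-and-test fix, using C11 atomics in
place of the kernel's atomic_t API. This is not kernel code; the names
release_resource, close_racy and close_fixed are made up for
illustration:

  #include <stdatomic.h>
  #include <stdio.h>
  #include <pthread.h>

  static atomic_int refcount = 2;  /* two mmaps of one ring buffer */

  /* Must run exactly once, like the free_uid call above. */
  static void release_resource(void)
  {
          printf("releasing\n");
  }

  /* Racy: mirrors the old atomic_dec() + atomic_read() pair. */
  static void *close_racy(void *arg)
  {
          (void)arg;
          atomic_fetch_sub(&refcount, 1);          /* atomic_dec()  */
          /* both threads can reach this point with refcount == 0 */
          if (atomic_load(&refcount) == 0)         /* atomic_read() */
                  release_resource();              /* may run twice */
          return NULL;
  }

  /* Fixed: mirrors atomic_dec_and_test(). The decrement and the zero
   * test are one atomic operation -- atomic_fetch_sub returns the
   * previous value, so exactly one thread observes the drop to zero. */
  static void *close_fixed(void *arg)
  {
          (void)arg;
          if (atomic_fetch_sub(&refcount, 1) == 1)
                  release_resource();              /* runs once */
          return NULL;
  }

  int main(void)
  {
          pthread_t a, b;

          pthread_create(&a, NULL, close_fixed, NULL);
          pthread_create(&b, NULL, close_fixed, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          return 0;
  }

Built with gcc -pthread, close_racy can call release_resource twice
when both threads decrement before either one reads, while close_fixed
cannot: only the thread that sees the previous value 1 is the last one
out, which is the same guarantee atomic_dec_and_test gives the patch.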