Date: Tue, 1 Dec 2020 12:32:49 -0800
Message-Id: <20201201203249.4172751-1-axelrasmussen@google.com>
Subject: [PATCH v2] mm: mmap_lock: fix use-after-free race and css ref leak in tracepoints
From: Axel Rasmussen <axelrasmussen@google.com>
To: Andrew Morton, Chinwen Chang, Daniel Jordan, David Rientjes,
	Davidlohr Bueso, Ingo Molnar, Jann Horn, Laurent Dufour,
	Michel Lespinasse, Stephen Rothwell, Steven Rostedt, Vlastimil Babka
Cc: Yafang Shao, davem@davemloft.net, dsahern@kernel.org,
	gregkh@linuxfoundation.org, kuba@kernel.org, liuhangbin@gmail.com,
	tj@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Axel Rasmussen

syzbot reported[1] a use-after-free introduced in 0f818c4bc1f3. The bug
is that an ongoing trace event might race with the tracepoint being
disabled (and therefore the _unreg() callback being called). Consider
this ordering:

T1: trace event fires, get_mm_memcg_path() is called
T1: get_memcg_path_buf() returns a buffer pointer
T2: trace_mmap_lock_unreg() is called, buffers are freed
T1: cgroup_path() is called with the now-freed buffer

The solution in this commit is to switch to mutex + RCU. With the RCU
API we can first stop new buffers from being handed out, then wait for
existing users to finish, and *then* free the buffers.

I have a simple reproducer program which spins up two pools of threads,
doing the following in a tight loop:

  Pool 1:
    mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
    munmap()

  Pool 2:
    echo 1 > /sys/kernel/debug/tracing/events/mmap_lock/enable
    echo 0 > /sys/kernel/debug/tracing/events/mmap_lock/enable

This triggers the use-after-free very quickly. With this patch, I let
it run for an hour without any BUGs.

While fixing this, I also noticed and fixed a css ref leak. Previously
we called get_mem_cgroup_from_mm(), but we never called css_put() to
release that reference. get_mm_memcg_path() now does this properly.

[1]: https://syzkaller.appspot.com/bug?extid=19e6dd9943972fa1c58a

Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
---
 mm/mmap_lock.c | 104 +++++++++++++++++++++++++++++++------------------
 1 file changed, 66 insertions(+), 38 deletions(-)

diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
index 12af8f1b8a14..5a3349bf1501 100644
--- a/mm/mmap_lock.c
+++ b/mm/mmap_lock.c
@@ -6,9 +6,10 @@
 #include <linux/cgroup.h>
 #include <linux/memcontrol.h>
 #include <linux/mmap_lock.h>
+#include <linux/mutex.h>
 #include <linux/percpu.h>
+#include <linux/rcupdate.h>
 #include <linux/smp.h>
-#include <linux/spinlock.h>
 #include <linux/trace_events.h>
 
 EXPORT_TRACEPOINT_SYMBOL(mmap_lock_start_locking);
@@ -23,8 +24,8 @@ EXPORT_TRACEPOINT_SYMBOL(mmap_lock_released);
  * concurrent _reg() and _unreg() calls, and count how many _reg() calls have
  * been made.
  */
-static DEFINE_SPINLOCK(reg_lock);
-static int reg_refcount;
+static DEFINE_MUTEX(reg_lock);
+static int reg_refcount; /* Protected by reg_lock. */
 
 /*
  * Size of the buffer for memcg path names. Ignoring stack trace support,
@@ -38,99 +39,126 @@ static int reg_refcount;
  */
 #define CONTEXT_COUNT 4
 
-DEFINE_PER_CPU(char *, memcg_path_buf);
-DEFINE_PER_CPU(int, memcg_path_buf_idx);
+static DEFINE_PER_CPU(char __rcu *, memcg_path_buf);
+static DEFINE_PER_CPU(int, memcg_path_buf_idx);
+
+/* Called with reg_lock held. */
+static void free_memcg_path_bufs(void)
+{
+        int cpu;
+        char *old;
+
+        for_each_possible_cpu(cpu) {
+                old = rcu_dereference_protected(per_cpu(memcg_path_buf, cpu),
+                        lockdep_is_held(&reg_lock));
+                if (old == NULL)
+                        break;
+                rcu_assign_pointer(per_cpu(memcg_path_buf, cpu), NULL);
+                /* Wait for inflight memcg_path_buf users to finish. */
+                synchronize_rcu();
+                kfree(old);
+        }
+}
 
 int trace_mmap_lock_reg(void)
 {
-        unsigned long flags;
         int cpu;
+        char *new;
 
-        spin_lock_irqsave(&reg_lock, flags);
+        mutex_lock(&reg_lock);
 
+        /* If the refcount is going 0->1, proceed with allocating buffers. */
         if (reg_refcount++)
                 goto out;
 
         for_each_possible_cpu(cpu) {
-                per_cpu(memcg_path_buf, cpu) = NULL;
-        }
-        for_each_possible_cpu(cpu) {
-                per_cpu(memcg_path_buf, cpu) = kmalloc(
-                        MEMCG_PATH_BUF_SIZE * CONTEXT_COUNT, GFP_NOWAIT);
-                if (per_cpu(memcg_path_buf, cpu) == NULL)
+                new = kmalloc(MEMCG_PATH_BUF_SIZE * CONTEXT_COUNT, GFP_KERNEL);
+                if (new == NULL)
                         goto out_fail;
-                per_cpu(memcg_path_buf_idx, cpu) = 0;
+                rcu_assign_pointer(per_cpu(memcg_path_buf, cpu), new);
+                /* Don't need to wait for inflights, they'd have gotten NULL. */
         }
 
 out:
-        spin_unlock_irqrestore(&reg_lock, flags);
+        mutex_unlock(&reg_lock);
         return 0;
 
 out_fail:
-        for_each_possible_cpu(cpu) {
-                if (per_cpu(memcg_path_buf, cpu) != NULL)
-                        kfree(per_cpu(memcg_path_buf, cpu));
-                else
-                        break;
-        }
+        free_memcg_path_bufs();
 
+        /* Since we failed, undo the earlier ref increment. */
         --reg_refcount;
 
-        spin_unlock_irqrestore(&reg_lock, flags);
+        mutex_unlock(&reg_lock);
         return -ENOMEM;
 }
 
 void trace_mmap_lock_unreg(void)
 {
-        unsigned long flags;
-        int cpu;
-
-        spin_lock_irqsave(&reg_lock, flags);
+        mutex_lock(&reg_lock);
 
+        /* If the refcount is going 1->0, proceed with freeing buffers. */
         if (--reg_refcount)
                 goto out;
 
-        for_each_possible_cpu(cpu) {
-                kfree(per_cpu(memcg_path_buf, cpu));
-        }
+        free_memcg_path_bufs();
 
 out:
-        spin_unlock_irqrestore(&reg_lock, flags);
+        mutex_unlock(&reg_lock);
 }
 
 static inline char *get_memcg_path_buf(void)
 {
+        char *buf;
         int idx;
 
+        rcu_read_lock();
+        buf = rcu_dereference(*this_cpu_ptr(&memcg_path_buf));
+        if (buf == NULL)
+                return NULL;
         idx = this_cpu_add_return(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE) -
               MEMCG_PATH_BUF_SIZE;
-        return &this_cpu_read(memcg_path_buf)[idx];
+        return &buf[idx];
 }
 
 static inline void put_memcg_path_buf(void)
 {
         this_cpu_sub(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE);
+        rcu_read_unlock();
 }
 
 /*
  * Write the given mm_struct's memcg path to a percpu buffer, and return a
- * pointer to it. If the path cannot be determined, NULL is returned.
+ * pointer to it. If the path cannot be determined, or no buffer was available
+ * (because the trace event is being unregistered), NULL is returned.
  *
  * Note: buffers are allocated per-cpu to avoid locking, so preemption must be
  * disabled by the caller before calling us, and re-enabled only after the
  * caller is done with the pointer.
+ *
+ * The caller must call put_memcg_path_buf() once the buffer is no longer
+ * needed. This must be done while preemption is still disabled.
  */
 static const char *get_mm_memcg_path(struct mm_struct *mm)
 {
+        char *buf = NULL;
         struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
 
-        if (memcg != NULL && likely(memcg->css.cgroup != NULL)) {
-                char *buf = get_memcg_path_buf();
+        if (memcg == NULL)
+                goto out;
+        if (unlikely(memcg->css.cgroup == NULL))
+                goto out_put;
 
-                cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
-                return buf;
-        }
-        return NULL;
+        buf = get_memcg_path_buf();
+        if (buf == NULL)
+                goto out_put;
+
+        cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
+
+out_put:
+        css_put(&memcg->css);
+out:
+        return buf;
 }
 
 #define TRACE_MMAP_LOCK_EVENT(type, mm, ...)                                   \
-- 
2.29.2.454.gaff20da3a2-goog
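
[Editor's sketch] For readers who want to exercise the race themselves, below is a
minimal userspace sketch of the kind of reproducer described in the commit message
(two thread pools: one mapping/unmapping anonymous memory, one toggling the
mmap_lock trace events). It is not the author's actual program; the thread counts,
the shared file descriptor, and the mapping size are illustrative assumptions. It
needs root and a kernel built with the mmap_lock tracepoints (ideally with KASAN)
to show the bug.

/*
 * repro.c -- hypothetical reproducer sketch, not the author's program.
 * Build: gcc -O2 -pthread repro.c -o repro
 * Run as root on a kernel that has the mmap_lock tracepoints.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define ENABLE_PATH "/sys/kernel/debug/tracing/events/mmap_lock/enable"
#define MAP_THREADS 4
#define TOGGLE_THREADS 2

/* Pool 1: map and unmap an anonymous page in a tight loop, so that
 * mmap_lock trace events fire as often as possible. */
static void *map_loop(void *arg)
{
	(void)arg;
	for (;;) {
		void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}
		munmap(p, 4096);
	}
	return NULL;
}

/* Pool 2: toggle the mmap_lock trace events on and off, so that the
 * _unreg() path races with in-flight events from pool 1. */
static void *toggle_loop(void *arg)
{
	int fd = *(int *)arg;

	for (;;) {
		if (write(fd, "1", 1) < 0 || write(fd, "0", 1) < 0) {
			perror("write");
			exit(1);
		}
	}
	return NULL;
}

int main(void)
{
	pthread_t tid;
	int fd, i;

	fd = open(ENABLE_PATH, O_WRONLY);
	if (fd < 0) {
		perror("open " ENABLE_PATH);
		return 1;
	}

	for (i = 0; i < MAP_THREADS; i++)
		pthread_create(&tid, NULL, map_loop, NULL);
	for (i = 0; i < TOGGLE_THREADS; i++)
		pthread_create(&tid, NULL, toggle_loop, &fd);

	/* Let the pools spin; on an unpatched kernel the use-after-free
	 * splat should appear in dmesg quickly, per the commit message. */
	pause();
	return 0;
}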