From: Song Liu <songliubraving@fb.com>
To: <linux-kernel@vger.kernel.org>
Cc: <kernel-team@fb.com>, Song Liu <songliubraving@fb.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Arnaldo Carvalho de Melo <acme@redhat.com>,
Jiri Olsa <jolsa@kernel.org>,
Peter Zijlstra <peterz@infradead.org>
Subject: [PATCH] perf/core: fix mlock accounting in perf_mmap()
Date: Fri, 17 Jan 2020 15:45:03 -0800
Message-ID: <20200117234503.1324050-1-songliubraving@fb.com>
sysctl_perf_event_mlock and user->locked_vm can change value
independently, so we cannot guarantee:

    user->locked_vm <= user_lock_limit

When user->locked_vm is already larger than user_lock_limit, we cannot
simply update extra and user_extra as:

    extra = user_locked - user_lock_limit;
    user_extra -= extra;

Otherwise, user_extra becomes negative. In extreme cases, this may lead
to a negative user->locked_vm (until this perf mmap is closed), which
breaks locked_vm accounting badly.

Fix this with two separate conditions, which ensure that user_extra is
always non-negative.
Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
Signed-off-by: Song Liu <songliubraving@fb.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
---
kernel/events/core.c | 28 ++++++++++++++++++++++++----
1 file changed, 24 insertions(+), 4 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index a1f8bde19b56..89acdd1574ef 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5920,11 +5920,31 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
if (user_locked > user_lock_limit) {
/*
- * charge locked_vm until it hits user_lock_limit;
- * charge the rest from pinned_vm
+ * sysctl_perf_event_mlock and user->locked_vm can change
+ * value independently, so we can't guarantee:
+ *
+ * user->locked_vm <= user_lock_limit
+ *
+ * We need to be careful to make sure user_extra >= 0.
+ *
+ * Use "user_locked - user_extra" here to avoid calling
+ * atomic_long_read() again.
*/
- extra = user_locked - user_lock_limit;
- user_extra -= extra;
+ if (user_locked - user_extra >= user_lock_limit) {
+ /*
+ * already used all of user_lock_limit, charge
+ * everything to pinned_vm
+ */
+ extra = user_extra;
+ user_extra = 0;
+ } else {
+ /*
+ * charge locked_vm until it hits user_lock_limit;
+ * charge the rest from pinned_vm
+ */
+ extra = user_locked - user_lock_limit;
+ user_extra -= extra;
+ }
}
lock_limit = rlimit(RLIMIT_MEMLOCK);
--
2.17.1