From: wei.guo.simon@gmail.com
To: linux-mm@kvack.org
Cc: Alexey Klimov <klimov.linux@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Eric B Munson <emunson@akamai.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Mel Gorman <mgorman@techsingularity.net>,
	Michal Hocko <mhocko@suse.com>, Shuah Khan <shuah@kernel.org>,
	Simon Guo <wei.guo.simon@gmail.com>,
	Thierry Reding <treding@nvidia.com>,
	Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH 1/4] mm: mlock: check against vma for actual mlock() size
Date: Tue, 30 Aug 2016 18:59:38 +0800	[thread overview]
Message-ID: <1472554781-9835-2-git-send-email-wei.guo.simon@gmail.com> (raw)
In-Reply-To: <1472554781-9835-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo <wei.guo.simon@gmail.com>

In do_mlock(), the check against the locked memory limit has a hole
which makes the following sequence fail at step 3):
1) The user has a 50k memory chunk starting at addressA, and the
memlock rlimit is 64k.
2) mlock(addressA, 30k)
3) mlock(addressA, 40k)

The 3rd step should be allowed, since the 40k request intersects
with the 30k already locked at step 2); it effectively asks to
mlock only the extra 10k of memory.
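
Not part of the patch, but the scenario can be reproduced from
userspace with a minimal sketch along these lines (assumes 4k pages
and that RLIMIT_MEMLOCK may be raised to 64k, e.g. when run as root):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl = { 64 * 1024, 64 * 1024 };
	char *addr;

	if (setrlimit(RLIMIT_MEMLOCK, &rl))
		perror("setrlimit");

	/* step 1): a 50k chunk starting at addressA */
	addr = mmap(NULL, 50 * 1024, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return 1;

	/* step 2): lock the first 30k -- charged as 8 pages */
	if (mlock(addr, 30 * 1024))
		perror("mlock 30k");

	/*
	 * step 3): lock 40k from the same address.  Only the extra
	 * 10k is new, but without this patch the full 40k (10 pages)
	 * is added on top of locked_vm, exceeding the 64k limit, and
	 * mlock() fails with ENOMEM.
	 */
	if (mlock(addr, 40 * 1024))
		perror("mlock 40k");
	else
		printf("mlock 40k succeeded\n");

	return 0;
}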

This patch checks the vmas to calculate the actual "new" mlock
size, if necessary, and adjusts the logic to fix this issue.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 mm/mlock.c | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/mm/mlock.c b/mm/mlock.c
index 14645be..9283187 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -617,6 +617,43 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
 	return error;
 }
 
+/*
+ * Go through the vma areas and sum up the size of the
+ * mlocked vma pages, returned as a page count.
+ * Note that the deferred memory locking case
+ * (mlock2(,,MLOCK_ONFAULT)) is also counted.
+ * Return value: count of previously mlocked pages
+ */
+static int count_mm_mlocked_page_nr(struct mm_struct *mm,
+		unsigned long start, size_t len)
+{
+	struct vm_area_struct *vma;
+	int count = 0;
+
+	if (mm == NULL)
+		mm = current->mm;
+
+	vma = find_vma(mm, start);
+	if (vma == NULL)
+		vma = mm->mmap;
+
+	for (; vma ; vma = vma->vm_next) {
+		if (start + len <= vma->vm_start)
+			break;
+		if (vma->vm_flags & VM_LOCKED) {
+			if (start > vma->vm_start)
+				count -= (start - vma->vm_start);
+			if (start + len < vma->vm_end) {
+				count += start + len - vma->vm_start;
+				break;
+			}
+			count += vma->vm_end - vma->vm_start;
+		}
+	}
+
+	return (PAGE_ALIGN(count) >> PAGE_SHIFT);
+}
+
 static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t flags)
 {
 	unsigned long locked;
@@ -639,6 +676,18 @@ static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t fla
 		return -EINTR;
 
 	locked += current->mm->locked_vm;
+	if ((locked > lock_limit) && (!capable(CAP_IPC_LOCK))) {
+		/*
+		 * It is possible that the requested region
+		 * intersects with previously mlocked areas;
+		 * that overlapping part is already accounted
+		 * in "mm->locked_vm" and should not be counted
+		 * again for this mlock request. So check and
+		 * adjust the locked count if necessary.
+		 */
+		locked -= count_mm_mlocked_page_nr(current->mm,
+				start, len);
+	}
 
 	/* check against resource limits */
 	if ((locked <= lock_limit) || capable(CAP_IPC_LOCK))
-- 
1.8.3.1
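
For reference (also not part of the patch), the MLOCK_ONFAULT case
mentioned in the helper's comment can be exercised from userspace
roughly as sketched below.  The raw syscall is used because older
glibc has no mlock2() wrapper; it is assumed the system headers
provide SYS_mlock2 (kernel 4.4+), and MLOCK_ONFAULT is defined
locally only in case <sys/mman.h> does not expose it:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef MLOCK_ONFAULT
#define MLOCK_ONFAULT	0x01	/* value from uapi asm-generic/mman-common.h */
#endif

int main(void)
{
	size_t len = 40 * 1024;
	char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (addr == MAP_FAILED)
		return 1;

	/*
	 * Lock on fault: the vma gets VM_LOCKED (plus VM_LOCKONFAULT)
	 * and the whole range is charged to mm->locked_vm up front,
	 * so count_mm_mlocked_page_nr() counts it like plain mlock().
	 */
	if (syscall(SYS_mlock2, addr, len, MLOCK_ONFAULT))
		perror("mlock2");

	/*
	 * A later overlapping mlock() is then checked against the
	 * rlimit without re-counting this already-locked range.
	 */
	if (mlock(addr, len))
		perror("mlock");

	return 0;
}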

Thread overview: 18+ messages
2016-08-30 10:59 [PATCH 0/4] mm: mlock: fix some locked_vm counting issues wei.guo.simon
2016-08-30 10:59 ` wei.guo.simon
2016-08-30 10:59 ` wei.guo.simon [this message]
2016-08-30 10:59   ` [PATCH 1/4] mm: mlock: check against vma for actual mlock() size wei.guo.simon
2016-08-30 11:35   ` Kirill A. Shutemov
2016-08-30 11:35     ` Kirill A. Shutemov
2016-08-30 10:59 ` [PATCH 2/4] mm: mlock: avoid increase mm->locked_vm on mlock() when already mlock2(,MLOCK_ONFAULT) wei.guo.simon
2016-08-30 10:59   ` wei.guo.simon
2016-08-30 11:36   ` Kirill A. Shutemov
2016-08-30 11:36     ` Kirill A. Shutemov
2016-08-30 10:59 ` [PATCH 3/4] selftest: split mlock2_ funcs into separate mlock2.h wei.guo.simon
2016-08-30 10:59   ` wei.guo.simon
2016-08-30 10:59 ` [PATCH 4/4] selftests/vm: add test for mlock() when areas are intersected wei.guo.simon
2016-08-30 10:59   ` wei.guo.simon
2016-08-31 23:14   ` David Rientjes
2016-08-31 23:14     ` David Rientjes
2016-09-01  7:14     ` Simon Guo
2016-09-01  7:14       ` Simon Guo
