From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>, Mel Gorman <mgorman@suse.de>,
	Rik van Riel <riel@redhat.com>, Vlastimil Babka <vbabka@suse.cz>,
	Christoph Lameter <cl@gentwo.org>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	Steve Capper <steve.capper@linaro.org>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@suse.cz>,
	Jerome Marchand <jmarchan@redhat.com>,
	Sasha Levin <sasha.levin@oracle.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv10 04/36] mm, thp: adjust conditions when we can reuse the page on WP fault
Date: Thu,  3 Sep 2015 18:12:50 +0300	[thread overview]
Message-ID: <1441293202-137314-5-git-send-email-kirill.shutemov@linux.intel.com> (raw)
In-Reply-To: <1441293202-137314-1-git-send-email-kirill.shutemov@linux.intel.com>

With the new refcounting we will be able to map the same compound page
with both PTEs and PMDs. This requires adjusting the conditions under
which we can reuse the page on a write-protection fault.

For a PTE fault we can't reuse the page if it's part of a huge page.

For a PMD fault we can only reuse the page if nobody else maps the huge
page or its part. We could do that by checking page_mapcount() on each
sub-page, but it's expensive.

The cheaper way is to check that page_count() equals 1: every mapcount
takes a page reference, so this way we can guarantee that the PMD is
the only mapping.

This approach can give a false negative if somebody has pinned the
page, but that doesn't affect correctness.
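
[Editor's note: the reference-counting argument above can be sketched with a
toy model. This is illustrative Python, not kernel code; the Page class and
method names are invented for the example. The point is that every mapping
takes a reference while a pin takes a reference without a mapping, so
page_count() == 1 together with page_mapcount() == 1 proves the faulting
PMD is the sole user.]

```python
# Toy model of the reuse check in do_huge_pmd_wp_page() (illustrative
# only): mappings raise both counters, pins raise only the refcount,
# so refcount == 1 rules out any user besides the single mapping.

class Page:
    def __init__(self):
        self.refcount = 0   # models page_count(): mappings + pins
        self.mapcount = 0   # models page_mapcount(): mappings only

    def map(self):
        self.mapcount += 1
        self.refcount += 1  # every mapcount takes a page reference

    def pin(self):
        self.refcount += 1  # e.g. a GUP pin: reference, no mapping

    def can_reuse_on_wp_fault(self):
        # mirrors: page_mapcount(page) == 1 && page_count(page) == 1
        return self.mapcount == 1 and self.refcount == 1

page = Page()
page.map()
assert page.can_reuse_on_wp_fault()      # sole mapping: reuse is safe

page.pin()
assert not page.can_reuse_on_wp_fault()  # pinned: false negative, safe
```

A pinned page fails the check even though reuse might be safe, which is
the false negative mentioned above: we fall back to copying, so
correctness is preserved either way.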

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/swap.h |  3 ++-
 mm/huge_memory.c     | 12 +++++++++++-
 mm/swapfile.c        |  3 +++
 3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 9c7c4b418498..1184fdbd30ba 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -540,7 +540,8 @@ static inline int swp_swapcount(swp_entry_t entry)
 	return 0;
 }
 
-#define reuse_swap_page(page)	(page_mapcount(page) == 1)
+#define reuse_swap_page(page) \
+	(!PageTransCompound(page) && page_mapcount(page) == 1)
 
 static inline int try_to_free_swap(struct page *page)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ad7c378e1f8c..18931b8ef1e7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1172,7 +1172,17 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	page = pmd_page(orig_pmd);
 	VM_BUG_ON_PAGE(!PageCompound(page) || !PageHead(page), page);
-	if (page_mapcount(page) == 1) {
+	/*
+	 * We can only reuse the page if nobody else maps the huge page or
+	 * its part. We can do it by checking page_mapcount() on each
+	 * sub-page, but it's expensive.
+	 * The cheaper way is to check that page_count() equals 1: every
+	 * mapcount takes a page reference, so this way we can guarantee
+	 * that the PMD is the only mapping.
+	 * This can give a false negative if somebody pinned the page, but
+	 * that's fine.
+	 */
+	if (page_mapcount(page) == 1 && page_count(page) == 1) {
 		pmd_t entry;
 		entry = pmd_mkyoung(orig_pmd);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index f131bc1838d3..c6aec93c8c0b 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -929,6 +929,9 @@ int reuse_swap_page(struct page *page)
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	if (unlikely(PageKsm(page)))
 		return 0;
+	/* The page is part of THP and cannot be reused */
+	if (PageTransCompound(page))
+		return 0;
 	count = page_mapcount(page);
 	if (count <= 1 && PageSwapCache(page)) {
 		count += page_swapcount(page);
-- 
2.5.0


