* [RFC 0/6] mm: support madvise(MADV_FREE)
@ 2014-03-14  6:37 ` Minchan Kim
  0 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-14  6:37 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Rik van Riel, Mel Gorman, Hugh Dickins, Dave Hansen,
	Johannes Weiner, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans, Minchan Kim

This patch set is an attempt to support MADV_FREE for Linux.

The rationale is as follows.

Allocators call munmap(2) when the user calls free(3) and the pointer
lies in an mmaped area. But munmap isn't cheap: it has to clean up all
the pte entries, unlink the vma, and return the freed pages to the
buddy allocator, so the overhead grows linearly with the size of the
mmaped area. That is why allocators prefer madvise(MADV_DONTNEED) over
munmap.

MADV_DONTNEED only takes the read-side lock of mmap_sem, so other
threads of the process can keep taking page faults concurrently, which
makes it better than munmap as long as address space is not in short
supply. The problem is that most allocators reuse that address space
soon afterwards, so the application pays for a page fault, a page
allocation, and page zeroing whenever the allocator has already called
madvise(MADV_DONTNEED) on the range.

To avoid that overhead, other OSes support MADV_FREE. The idea is to
simply mark pages as lazyfree when madvise is called and purge them
only when memory pressure happens. Otherwise the VM does not detach the
pages from the address space, so the application can reuse that memory
without the overhead above.
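
For illustration, here is a minimal user-space sketch of the intended
usage (the buffer size is arbitrary, and MADV_FREE is defined locally
since libc headers don't carry the new value yet):

    #include <sys/mman.h>
    #include <string.h>

    #ifndef MADV_FREE
    #define MADV_FREE 5   /* value added by this series */
    #endif

    int main(void)
    {
            size_t len = 64 << 20;
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (buf == MAP_FAILED)
                    return 1;

            memset(buf, 1, len);            /* use the memory */
            madvise(buf, len, MADV_FREE);   /* "free" it: pages become lazyfree */
            /*
             * The pages stay mapped.  Under memory pressure the kernel may
             * discard them; otherwise the range can be reused directly,
             * without the fault/alloc/zeroing cost of MADV_DONTNEED.
             */
            memset(buf, 2, len);            /* reuse without munmap/mmap */
            return 0;
    }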

I tweaked jemalloc to use MADV_FREE for testing.

diff --git a/src/chunk_mmap.c b/src/chunk_mmap.c
index 8a42e75..20e31af 100644
--- a/src/chunk_mmap.c
+++ b/src/chunk_mmap.c
@@ -131,7 +131,7 @@ pages_purge(void *addr, size_t length)
 #  else
 #    error "No method defined for purging unused dirty pages."
 #  endif
-       int err = madvise(addr, length, JEMALLOC_MADV_PURGE);
+       int err = madvise(addr, length, 5);
        unzeroed = (JEMALLOC_MADV_ZEROS == false || err != 0);
 #  undef JEMALLOC_MADV_PURGE
 #  undef JEMALLOC_MADV_ZEROS
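
The hard-coded 5 above is the MADV_FREE value this series adds to
include/uapi/asm-generic/mman-common.h; user-space headers don't define
it yet. An equivalent, slightly cleaner form of the same tweak would be:

    #ifndef MADV_FREE
    #  define MADV_FREE 5   /* matches the value added by this series */
    #endif

    int err = madvise(addr, length, MADV_FREE);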


RAM 2G, CPU 4, ebizzy benchmark (./ebizzy -S 30 -n 512)

(1.1) stands for 1 process and 1 thread, so for example,
(1.4) is 1 process and 4 threads.

vanilla jemalloc	 patched jemalloc

1.1       1.1
records:  5              records:  5
avg:      7404.60        avg:      14059.80
std:      116.67(1.58%)  std:      93.92(0.67%)
max:      7564.00        max:      14152.00
min:      7288.00        min:      13893.00
1.4       1.4
records:  5              records:  5
avg:      16160.80       avg:      30173.00
std:      509.80(3.15%)  std:      3050.72(10.11%)
max:      16728.00       max:      33989.00
min:      15216.00       min:      25173.00
1.8       1.8
records:  5              records:  5
avg:      16003.00       avg:      30080.20
std:      290.40(1.81%)  std:      2063.57(6.86%)
max:      16537.00       max:      32735.00
min:      15727.00       min:      27381.00
4.1       4.1
records:  5              records:  5
avg:      4003.60        avg:      8064.80
std:      65.33(1.63%)   std:      143.89(1.78%)
max:      4118.00        max:      8319.00
min:      3921.00        min:      7888.00
4.4       4.4
records:  5              records:  5
avg:      3907.40        avg:      7199.80
std:      48.68(1.25%)   std:      80.21(1.11%)
max:      3997.00        max:      7320.00
min:      3863.00        min:      7113.00
4.8       4.8
records:  5              records:  5
avg:      3893.00        avg:      7195.20
std:      19.11(0.49%)   std:      101.55(1.41%)
max:      3927.00        max:      7309.00
min:      3869.00        min:      7012.00
8.1       8.1
records:  5              records:  5
avg:      1942.00        avg:      3602.80
std:      34.60(1.78%)   std:      22.97(0.64%)
max:      2010.00        max:      3632.00
min:      1913.00        min:      3563.00
8.4       8.4
records:  5              records:  5
avg:      1938.00        avg:      3405.60
std:      32.77(1.69%)   std:      36.25(1.06%)
max:      1998.00        max:      3468.00
min:      1905.00        min:      3374.00
8.8       8.8
records:  5              records:  5
avg:      1977.80        avg:      3434.20
std:      25.75(1.30%)   std:      57.95(1.69%)
max:      2011.00        max:      3533.00
min:      1937.00        min:      3363.00

So MADV_FREE is about twice as fast as MADV_DONTNEED in every case.

I didn't test extensively, but it's enough to show the concept and
direction before LSF/MM.

Patchset is based on 3.14-rc6.

Any comments are welcome!

Minchan Kim (6):
  mm: clean up PAGE_MAPPING_FLAGS
  mm: work deactivate_page with anon pages
  mm: support madvise(MADV_FREE)
  mm: add stat about lazyfree pages
  mm: reclaim lazyfree pages in swapless system
  mm: ksm: don't merge lazyfree page

 include/asm-generic/tlb.h              |  9 ++++++++
 include/linux/mm.h                     | 39 +++++++++++++++++++++++++++++++++-
 include/linux/mm_inline.h              |  9 ++++++++
 include/linux/mmzone.h                 |  1 +
 include/linux/rmap.h                   |  1 +
 include/linux/swap.h                   | 15 +++++++++++++
 include/linux/vm_event_item.h          |  1 +
 include/uapi/asm-generic/mman-common.h |  1 +
 mm/ksm.c                               | 18 +++++++++++-----
 mm/madvise.c                           | 17 +++++++++++++--
 mm/memory.c                            | 12 ++++++++++-
 mm/page_alloc.c                        |  5 ++++-
 mm/rmap.c                              | 25 ++++++++++++++++++----
 mm/swap.c                              | 20 ++++++++---------
 mm/swap_state.c                        | 38 ++++++++++++++++++++++++++++++++-
 mm/vmscan.c                            | 32 +++++++++++++++++++++++++---
 mm/vmstat.c                            |  2 ++
 17 files changed, 217 insertions(+), 28 deletions(-)

-- 
1.9.0



* [RFC 1/6] mm: clean up PAGE_MAPPING_FLAGS
  2014-03-14  6:37 ` Minchan Kim
@ 2014-03-14  6:37   ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-14  6:37 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Rik van Riel, Mel Gorman, Hugh Dickins, Dave Hansen,
	Johannes Weiner, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans, Minchan Kim

This is preparation for squeezing in a new flag, PAGE_MAPPING_LZFREE:
functions that derive an anon_vma from page->mapping should no longer
assume that adding/subtracting PAGE_MAPPING_ANON is enough.
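
For reference, page_rmapping() (an existing helper in
include/linux/mm.h, shown here roughly) masks off all of
PAGE_MAPPING_FLAGS rather than subtracting only PAGE_MAPPING_ANON, so
it keeps working once a new flag bit joins the mask:

    /* Neutral page->mapping pointer to address_space or anon_vma or other */
    static inline void *page_rmapping(struct page *page)
    {
            return (void *)((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
    }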

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/rmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index d9d42316a99a..76069afa6b81 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -412,7 +412,7 @@ struct anon_vma *page_get_anon_vma(struct page *page)
 	if (!page_mapped(page))
 		goto out;
 
-	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
+	anon_vma = page_rmapping(page);
 	if (!atomic_inc_not_zero(&anon_vma->refcount)) {
 		anon_vma = NULL;
 		goto out;
@@ -455,7 +455,7 @@ struct anon_vma *page_lock_anon_vma_read(struct page *page)
 	if (!page_mapped(page))
 		goto out;
 
-	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
+	anon_vma = page_rmapping(page);
 	root_anon_vma = ACCESS_ONCE(anon_vma->root);
 	if (down_read_trylock(&root_anon_vma->rwsem)) {
 		/*
-- 
1.9.0



* [RFC 2/6] mm: work deactivate_page with anon pages
  2014-03-14  6:37 ` Minchan Kim
@ 2014-03-14  6:37   ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-14  6:37 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Rik van Riel, Mel Gorman, Hugh Dickins, Dave Hansen,
	Johannes Weiner, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans, Minchan Kim

Currently, deactivate_page() works only for file pages, but MADV_FREE
will use it to move lazyfree pages to the tail of the inactive LRU, so
this patch makes deactivate_page() work with anon pages as well as file
pages.
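
As a rough sketch of how the later patches use this (SetPageLazyFree()
and the MADV_FREE plumbing are introduced in patch 3 of this series):

    /* per anon page hit by madvise(MADV_FREE), roughly: */
    lock_page(page);
    SetPageLazyFree(page);     /* tag the page as lazily freeable        */
    deactivate_page(page);     /* queue it toward the inactive LRU tail  */
    unlock_page(page);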

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/mm_inline.h |  9 +++++++++
 mm/swap.c                 | 20 ++++++++++----------
 2 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index cf55945c83fb..0503caafd532 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -22,6 +22,15 @@ static inline int page_is_file_cache(struct page *page)
 	return !PageSwapBacked(page);
 }
 
+static __always_inline void add_page_to_lru_list_tail(struct page *page,
+				struct lruvec *lruvec, enum lru_list lru)
+{
+	int nr_pages = hpage_nr_pages(page);
+	mem_cgroup_update_lru_size(lruvec, lru, nr_pages);
+	list_add_tail(&page->lru, &lruvec->lists[lru]);
+	__mod_zone_page_state(lruvec_zone(lruvec), NR_LRU_BASE + lru, nr_pages);
+}
+
 static __always_inline void add_page_to_lru_list(struct page *page,
 				struct lruvec *lruvec, enum lru_list lru)
 {
diff --git a/mm/swap.c b/mm/swap.c
index 0092097b3f4c..ac13714b5d8b 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -643,14 +643,11 @@ void add_page_to_unevictable_list(struct page *page)
  * If the page isn't page_mapped and dirty/writeback, the page
  * could reclaim asap using PG_reclaim.
  *
- * 1. active, mapped page -> none
- * 2. active, dirty/writeback page -> inactive, head, PG_reclaim
- * 3. inactive, mapped page -> none
- * 4. inactive, dirty/writeback page -> inactive, head, PG_reclaim
- * 5. inactive, clean -> inactive, tail
- * 6. Others -> none
+ * 1. file mapped page -> none
+ * 2. dirty/writeback page -> head of inactive with PG_reclaim
+ * 3. inactive, clean -> tail of inactive
  *
- * In 4, why it moves inactive's head, the VM expects the page would
+ * In 2, why it moves inactive's head, the VM expects the page would
  * be write it out by flusher threads as this is much more effective
  * than the single-page writeout from reclaim.
  */
@@ -667,7 +664,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 		return;
 
 	/* Some processes are using the page */
-	if (page_mapped(page))
+	if (!PageAnon(page) && page_mapped(page))
 		return;
 
 	active = PageActive(page);
@@ -677,7 +674,6 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 	del_page_from_lru_list(page, lruvec, lru + active);
 	ClearPageActive(page);
 	ClearPageReferenced(page);
-	add_page_to_lru_list(page, lruvec, lru);
 
 	if (PageWriteback(page) || PageDirty(page)) {
 		/*
@@ -686,12 +682,16 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 		 * is _really_ small and  it's non-critical problem.
 		 */
 		SetPageReclaim(page);
+		add_page_to_lru_list(page, lruvec, lru);
 	} else {
 		/*
 		 * The page's writeback ends up during pagevec
 		 * We moves tha page into tail of inactive.
+		 *
+		 * The lazyfree page moves into the LRU's tail so that it
+		 * can be discarded easily.
 		 */
-		list_move_tail(&page->lru, &lruvec->lists[lru]);
+		add_page_to_lru_list_tail(page, lruvec, lru);
 		__count_vm_event(PGROTATED);
 	}
 
-- 
1.9.0



* [RFC 3/6] mm: support madvise(MADV_FREE)
  2014-03-14  6:37 ` Minchan Kim
@ 2014-03-14  6:37   ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-14  6:37 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Rik van Riel, Mel Gorman, Hugh Dickins, Dave Hansen,
	Johannes Weiner, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans, Minchan Kim

Linux doesn't have the ability to free pages lazily, while other OSes
have supported this for a long time via madvise(MADV_FREE).

The gain is clear: under memory pressure the kernel can simply discard
freed pages rather than swapping them out or invoking the OOM killer.

Without memory pressure, freed pages can be reused by userspace without
any additional overhead (e.g., page fault + page allocation + page
zeroing).

The first heavy users would be general-purpose allocators (e.g.,
jemalloc; I hope ptmalloc will support it too), and jemalloc already
supports the feature on other OSes (e.g., FreeBSD).

At the moment this patch breaks the build on architectures that have
their own TLB flush scheme (i.e., everything other than x86), but if
there is no objection to this direction, I will add patches handling
the other architectures in the next iteration.
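
To summarize the flow this patch wires up (all function names below are
introduced or modified by this patch):

    madvise(addr, len, MADV_FREE)
      -> madvise_dontneed(..., MADV_FREE)               mm/madvise.c
      -> zap_page_range() with zap_details.lazy_free     mm/memory.c
         (pte_mkold + pte_mkclean, then tlb_madvfree_page() tags the page)
      -> free_pages_and_swap_cache() -> move_lazyfree()  mm/swap_state.c
         (SetPageLazyFree() + deactivate_page(): page goes to inactive tail)
      -> shrink_page_list() + try_to_unmap(TTU_LAZYFREE) mm/vmscan.c
         (a still-clean lazyfree page is discarded instead of swapped out;
          a page re-dirtied after MADV_FREE fails unmap and is kept)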

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/asm-generic/tlb.h              |  9 ++++++++
 include/linux/mm.h                     | 35 ++++++++++++++++++++++++++++++-
 include/linux/rmap.h                   |  1 +
 include/linux/swap.h                   | 15 ++++++++++++++
 include/uapi/asm-generic/mman-common.h |  1 +
 mm/madvise.c                           | 17 +++++++++++++--
 mm/memory.c                            | 12 ++++++++++-
 mm/rmap.c                              | 21 +++++++++++++++++--
 mm/swap_state.c                        | 38 +++++++++++++++++++++++++++++++++-
 mm/vmscan.c                            | 22 +++++++++++++++++++-
 10 files changed, 163 insertions(+), 8 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 5672d7ea1fa0..b82ee729a065 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -116,8 +116,17 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long
 void tlb_flush_mmu(struct mmu_gather *tlb);
 void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start,
 							unsigned long end);
+int __tlb_madvfree_page(struct mmu_gather *tlb, struct page *page);
 int __tlb_remove_page(struct mmu_gather *tlb, struct page *page);
 
+static inline void tlb_madvfree_page(struct mmu_gather *tlb, struct page *page)
+{
+	/* Prevent page free */
+	get_page(page);
+	if (!__tlb_remove_page(tlb, MarkLazyFree(page)))
+		tlb_flush_mmu(tlb);
+}
+
 /* tlb_remove_page
  *	Similar to __tlb_remove_page but will call tlb_flush_mmu() itself when
  *	required.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c1b7414c7bef..9b048cabce27 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -933,10 +933,16 @@ void page_address_init(void);
  * Please note that, confusingly, "page_mapping" refers to the inode
  * address_space which maps the page from disk; whereas "page_mapped"
  * refers to user virtual address space into which the page is mapped.
+ *
+ * PAGE_MAPPING_LZFREE bit is set along with PAGE_MAPPING_ANON bit
+ * and then page->mapping points to an anon_vma. This flag is used
+ * for lazy freeing the page instead of swap.
  */
 #define PAGE_MAPPING_ANON	1
 #define PAGE_MAPPING_KSM	2
-#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM)
+#define PAGE_MAPPING_LZFREE	4
+#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM | \
+				 PAGE_MAPPING_LZFREE)
 
 extern struct address_space *page_mapping(struct page *page);
 
@@ -962,6 +968,32 @@ static inline int PageAnon(struct page *page)
 	return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
 }
 
+static inline void SetPageLazyFree(struct page *page)
+{
+	BUG_ON(!PageAnon(page));
+	BUG_ON(!PageLocked(page));
+
+	page->mapping = (void *)((unsigned long)page->mapping |
+			PAGE_MAPPING_LZFREE);
+}
+
+static inline void ClearPageLazyFree(struct page *page)
+{
+	BUG_ON(!PageAnon(page));
+	BUG_ON(!PageLocked(page));
+
+	page->mapping = (void *)((unsigned long)page->mapping &
+				~PAGE_MAPPING_LZFREE);
+}
+
+static inline int PageLazyFree(struct page *page)
+{
+	if (((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
+			(PAGE_MAPPING_ANON|PAGE_MAPPING_LZFREE))
+		return 1;
+	return 0;
+}
+
 /*
  * Return the pagecache index of the passed page.  Regular pagecache pages
  * use ->index whereas swapcache pages use ->private
@@ -1054,6 +1086,7 @@ struct zap_details {
 	struct address_space *check_mapping;	/* Check page->mapping if set */
 	pgoff_t	first_index;			/* Lowest page->index to unmap */
 	pgoff_t last_index;			/* Highest page->index to unmap */
+	int lazy_free;				/* do lazy free */
 };
 
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 1da693d51255..19e74aebb3d5 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -75,6 +75,7 @@ enum ttu_flags {
 	TTU_UNMAP = 0,			/* unmap mode */
 	TTU_MIGRATION = 1,		/* migration mode */
 	TTU_MUNLOCK = 2,		/* munlock mode */
+	TTU_LAZYFREE  = 3,		/* free lazyfree page */
 	TTU_ACTION_MASK = 0xff,
 
 	TTU_IGNORE_MLOCK = (1 << 8),	/* ignore mlock */
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 46ba0c6c219f..223909c14703 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -13,6 +13,21 @@
 #include <linux/page-flags.h>
 #include <asm/page.h>
 
+static inline struct page *MarkLazyFree(struct page *p)
+{
+	return (struct page *)((unsigned long)p | 0x1UL);
+}
+
+static inline struct page *ClearLazyFree(struct page *p)
+{
+	return (struct page *)((unsigned long)p & ~0x1UL);
+}
+
+static inline bool LazyFree(struct page *p)
+{
+	return ((unsigned long)p & 0x1UL) ? true : false;
+}
+
 struct notifier_block;
 
 struct bio;
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 4164529a94f9..7e257e49be2e 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -34,6 +34,7 @@
 #define MADV_SEQUENTIAL	2		/* expect sequential page references */
 #define MADV_WILLNEED	3		/* will need these pages */
 #define MADV_DONTNEED	4		/* don't need these pages */
+#define MADV_FREE	5		/* do lazy free */
 
 /* common parameters: try to keep these consistent across architectures */
 #define MADV_REMOVE	9		/* remove these pages & resources */
diff --git a/mm/madvise.c b/mm/madvise.c
index 539eeb96b323..2e904289a2bb 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -31,6 +31,7 @@ static int madvise_need_mmap_write(int behavior)
 	case MADV_REMOVE:
 	case MADV_WILLNEED:
 	case MADV_DONTNEED:
+	case MADV_FREE:
 		return 0;
 	default:
 		/* be safe, default to 1. list exceptions explicitly */
@@ -272,7 +273,8 @@ static long madvise_willneed(struct vm_area_struct *vma,
  */
 static long madvise_dontneed(struct vm_area_struct *vma,
 			     struct vm_area_struct **prev,
-			     unsigned long start, unsigned long end)
+			     unsigned long start, unsigned long end,
+			     int behavior)
 {
 	*prev = vma;
 	if (vma->vm_flags & (VM_LOCKED|VM_HUGETLB|VM_PFNMAP))
@@ -284,8 +286,17 @@ static long madvise_dontneed(struct vm_area_struct *vma,
 			.last_index = ULONG_MAX,
 		};
 		zap_page_range(vma, start, end - start, &details);
+	} else if (behavior == MADV_FREE) {
+		struct zap_details details = {
+			.lazy_free = 1,
+		};
+
+		if (vma->vm_file)
+			return -EINVAL;
+		zap_page_range(vma, start, end - start, &details);
 	} else
 		zap_page_range(vma, start, end - start, NULL);
+
 	return 0;
 }
 
@@ -384,8 +395,9 @@ madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
 		return madvise_remove(vma, prev, start, end);
 	case MADV_WILLNEED:
 		return madvise_willneed(vma, prev, start, end);
+	case MADV_FREE:
 	case MADV_DONTNEED:
-		return madvise_dontneed(vma, prev, start, end);
+		return madvise_dontneed(vma, prev, start, end, behavior);
 	default:
 		return madvise_behavior(vma, prev, start, end, behavior);
 	}
@@ -403,6 +415,7 @@ madvise_behavior_valid(int behavior)
 	case MADV_REMOVE:
 	case MADV_WILLNEED:
 	case MADV_DONTNEED:
+	case MADV_FREE:
 #ifdef CONFIG_KSM
 	case MADV_MERGEABLE:
 	case MADV_UNMERGEABLE:
diff --git a/mm/memory.c b/mm/memory.c
index 22dfa617bddb..f1f0dc13e8d1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1093,6 +1093,15 @@ again:
 
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(details) && page) {
+				if (details->lazy_free && PageAnon(page)) {
+					ptent = pte_mkold(ptent);
+					ptent = pte_mkclean(ptent);
+					set_pte_at(mm, addr, pte, ptent);
+					tlb_remove_tlb_entry(tlb, pte, addr);
+					tlb_madvfree_page(tlb, page);
+					continue;
+				}
+
 				/*
 				 * unmap_shared_mapping_pages() wants to
 				 * invalidate cache without truncating:
@@ -1276,7 +1285,8 @@ static void unmap_page_range(struct mmu_gather *tlb,
 	pgd_t *pgd;
 	unsigned long next;
 
-	if (details && !details->check_mapping && !details->nonlinear_vma)
+	if (details && !details->check_mapping && !details->nonlinear_vma &&
+		!details->lazy_free)
 		details = NULL;
 
 	BUG_ON(addr >= end);
diff --git a/mm/rmap.c b/mm/rmap.c
index 76069afa6b81..7712f39acfee 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -377,6 +377,15 @@ void __init anon_vma_init(void)
 	anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC);
 }
 
+static inline bool is_anon_vma(unsigned long mapping)
+{
+	unsigned long anon_mapping = mapping & PAGE_MAPPING_FLAGS;
+	if ((anon_mapping != PAGE_MAPPING_ANON) &&
+	    (anon_mapping != (PAGE_MAPPING_ANON|PAGE_MAPPING_LZFREE)))
+		return false;
+	return true;
+}
+
 /*
  * Getting a lock on a stable anon_vma from a page off the LRU is tricky!
  *
@@ -407,7 +416,7 @@ struct anon_vma *page_get_anon_vma(struct page *page)
 
 	rcu_read_lock();
 	anon_mapping = (unsigned long) ACCESS_ONCE(page->mapping);
-	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
+	if (!is_anon_vma(anon_mapping))
 		goto out;
 	if (!page_mapped(page))
 		goto out;
@@ -450,7 +459,7 @@ struct anon_vma *page_lock_anon_vma_read(struct page *page)
 
 	rcu_read_lock();
 	anon_mapping = (unsigned long) ACCESS_ONCE(page->mapping);
-	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
+	if (!is_anon_vma(anon_mapping))
 		goto out;
 	if (!page_mapped(page))
 		goto out;
@@ -1165,6 +1174,14 @@ int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		}
 		set_pte_at(mm, address, pte,
 			   swp_entry_to_pte(make_hwpoison_entry(page)));
+	} else if ((flags & TTU_LAZYFREE) && PageLazyFree(page)) {
+		BUG_ON(!PageAnon(page));
+		if (unlikely(pte_dirty(pteval))) {
+			set_pte_at(mm, address, pte, pteval);
+			ret = SWAP_FAIL;
+			goto out_unmap;
+		}
+		dec_mm_counter(mm, MM_ANONPAGES);
 	} else if (PageAnon(page)) {
 		swp_entry_t entry = { .val = page_private(page) };
 		pte_t swp_pte;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index e76ace30d436..0718ecd166dc 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -18,6 +18,7 @@
 #include <linux/pagevec.h>
 #include <linux/migrate.h>
 #include <linux/page_cgroup.h>
+#include <linux/ksm.h>
 
 #include <asm/pgtable.h>
 
@@ -256,8 +257,36 @@ void free_page_and_swap_cache(struct page *page)
 }
 
 /*
+ * move @page to inactive LRU's tail so that VM can discard it
+ * rather than swapping hot pages out when memory pressure happens.
+ */
+static bool move_lazyfree(struct page *page)
+{
+	if (!trylock_page(page))
+		return false;
+
+	if (PageKsm(page)) {
+		unlock_page(page);
+		return false;
+	}
+
+	if (PageSwapCache(page) &&
+			try_to_free_swap(page))
+		ClearPageDirty(page);
+
+	if (!PageLazyFree(page)) {
+		SetPageLazyFree(page);
+		deactivate_page(page);
+	}
+
+	unlock_page(page);
+	return true;
+}
+
+/*
  * Passed an array of pages, drop them all from swapcache and then release
  * them.  They are removed from the LRU and freed if this is their last use.
+ * If the pages passed are lazyfree, deactivate them instead of freeing.
  */
 void free_pages_and_swap_cache(struct page **pages, int nr)
 {
@@ -269,7 +298,14 @@ void free_pages_and_swap_cache(struct page **pages, int nr)
 		int i;
 
 		for (i = 0; i < todo; i++)
-			free_swap_cache(pagep[i]);
+			if (LazyFree(pagep[i])) {
+				pagep[i] = ClearLazyFree(pagep[i]);
+				/* If we failed, just free */
+				if (!move_lazyfree(pagep[i]))
+					free_swap_cache(pagep[i]);
+			} else {
+				free_swap_cache(pagep[i]);
+			}
 		release_pages(pagep, todo, 0);
 		pagep += todo;
 		nr -= todo;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a9c74b409681..0ab38faebe98 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -817,6 +817,25 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 
 		sc->nr_scanned++;
 
+		if (PageLazyFree(page)) {
+			switch (try_to_unmap(page, ttu_flags)) {
+			case SWAP_FAIL:
+				ClearPageLazyFree(page);
+				goto activate_locked;
+			case SWAP_AGAIN:
+				ClearPageLazyFree(page);
+				goto keep_locked;
+			case SWAP_SUCCESS:
+				ClearPageLazyFree(page);
+				if (unlikely(PageSwapCache(page)))
+					try_to_free_swap(page);
+				if (!page_freeze_refs(page, 1))
+					goto keep_locked;
+				unlock_page(page);
+				goto free_it;
+			}
+		}
+
 		if (unlikely(!page_evictable(page)))
 			goto cull_mlocked;
 
@@ -1481,7 +1500,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	if (nr_taken == 0)
 		return 0;
 
-	nr_reclaimed = shrink_page_list(&page_list, zone, sc, TTU_UNMAP,
+	nr_reclaimed = shrink_page_list(&page_list, zone, sc,
+				TTU_UNMAP|TTU_LAZYFREE,
 				&nr_dirty, &nr_unqueued_dirty, &nr_congested,
 				&nr_writeback, &nr_immediate,
 				false);
-- 
1.9.0



* [RFC 4/6] mm: add stat about lazyfree pages
  2014-03-14  6:37 ` Minchan Kim
@ 2014-03-14  6:37   ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-14  6:37 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Rik van Riel, Mel Gorman, Hugh Dickins, Dave Hansen,
	Johannes Weiner, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans, Minchan Kim

This patch adds new vmstat counters for lazyfree pages so that an admin
can check how many lazyfree pages remain in each zone and how many
lazyfree pages have been purged so far.
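
With this applied, the counters can be read from /proc/vmstat (the names
below are the ones added by this patch); for example:

    grep -E 'nr_lazyfree_pages|pglazyfree' /proc/vmstat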

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/mm.h            | 4 ++++
 include/linux/mmzone.h        | 1 +
 include/linux/vm_event_item.h | 1 +
 mm/page_alloc.c               | 5 ++++-
 mm/vmscan.c                   | 1 +
 mm/vmstat.c                   | 2 ++
 6 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9b048cabce27..498613946991 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -975,6 +975,8 @@ static inline void SetPageLazyFree(struct page *page)
 
 	page->mapping = (void *)((unsigned long)page->mapping |
 			PAGE_MAPPING_LZFREE);
+
+	__inc_zone_page_state(page, NR_LAZYFREE_PAGES);
 }
 
 static inline void ClearPageLazyFree(struct page *page)
@@ -984,6 +986,8 @@ static inline void ClearPageLazyFree(struct page *page)
 
 	page->mapping = (void *)((unsigned long)page->mapping &
 				~PAGE_MAPPING_LZFREE);
+
+	__dec_zone_page_state(page, NR_LAZYFREE_PAGES);
 }
 
 static inline int PageLazyFree(struct page *page)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5f2052c83154..7366ec56ea73 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -113,6 +113,7 @@ enum zone_stat_item {
 	NR_ACTIVE_FILE,		/*  "     "     "   "       "         */
 	NR_UNEVICTABLE,		/*  "     "     "   "       "         */
 	NR_MLOCK,		/* mlock()ed pages found and moved off LRU */
+	NR_LAZYFREE_PAGES,	/* freeable pages at memory pressure */
 	NR_ANON_PAGES,	/* Mapped anonymous pages */
 	NR_FILE_MAPPED,	/* pagecache pages mapped into pagetables.
 			   only modified from process context */
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 3a712e2e7d76..6b5b870895da 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -25,6 +25,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		FOR_ALL_ZONES(PGALLOC),
 		PGFREE, PGACTIVATE, PGDEACTIVATE,
 		PGFAULT, PGMAJFAULT,
+		PGLAZYFREE,
 		FOR_ALL_ZONES(PGREFILL),
 		FOR_ALL_ZONES(PGSTEAL_KSWAPD),
 		FOR_ALL_ZONES(PGSTEAL_DIRECT),
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3bac76ae4b30..596f24ecf397 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -731,8 +731,11 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
 
-	if (PageAnon(page))
+	if (PageAnon(page)) {
+		if (PageLazyFree(page))
+			__dec_zone_page_state(page, NR_LAZYFREE_PAGES);
 		page->mapping = NULL;
+	}
 	for (i = 0; i < (1 << order); i++)
 		bad += free_pages_check(page + i);
 	if (bad)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0ab38faebe98..98a1c3ffcaab 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -832,6 +832,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 				if (!page_freeze_refs(page, 1))
 					goto keep_locked;
 				unlock_page(page);
+				count_vm_event(PGLAZYFREE);
 				goto free_it;
 			}
 		}
diff --git a/mm/vmstat.c b/mm/vmstat.c
index def5dd2fbe61..4235aeb9b96e 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -742,6 +742,7 @@ const char * const vmstat_text[] = {
 	"nr_active_file",
 	"nr_unevictable",
 	"nr_mlock",
+	"nr_lazyfree_pages",
 	"nr_anon_pages",
 	"nr_mapped",
 	"nr_file_pages",
@@ -789,6 +790,7 @@ const char * const vmstat_text[] = {
 
 	"pgfault",
 	"pgmajfault",
+	"pglazyfree",
 
 	TEXTS_FOR_ZONES("pgrefill")
 	TEXTS_FOR_ZONES("pgsteal_kswapd")
-- 
1.9.0



* [RFC 5/6] mm: reclaim lazyfree pages in swapless system
  2014-03-14  6:37 ` Minchan Kim
@ 2014-03-14  6:37   ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-14  6:37 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Rik van Riel, Mel Gorman, Hugh Dickins, Dave Hansen,
	Johannes Weiner, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans, Minchan Kim

If there are lazyfree pages in the system, shrink the inactive
anonymous LRU to discard lazyfree pages regardless of whether swap
is available.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/vmscan.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 98a1c3ffcaab..ad73e053c581 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1889,8 +1889,13 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	if (!global_reclaim(sc))
 		force_scan = true;
 
-	/* If we have no swap space, do not bother scanning anon pages. */
-	if (!sc->may_swap || (get_nr_swap_pages() <= 0)) {
+	/*
+	 * If we have no swap space and no lazyfree pages,
+	 * do not bother scanning anon pages.
+	 */
+	if (!sc->may_swap ||
+		(get_nr_swap_pages() <= 0 &&
+			zone_page_state(zone, NR_LAZYFREE_PAGES) <= 0)) {
 		scan_balance = SCAN_FILE;
 		goto out;
 	}
-- 
1.9.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC 6/6] mm: ksm: don't merge lazyfree page
  2014-03-14  6:37 ` Minchan Kim
@ 2014-03-14  6:37   ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-14  6:37 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Rik van Riel, Mel Gorman, Hugh Dickins, Dave Hansen,
	Johannes Weiner, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans, Minchan Kim

I didn't test this patch; I just wanted to keep lazyfree pages out of
KSM merging.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/ksm.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 68710e80994a..43ca73aa45e7 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -470,7 +470,8 @@ static struct page *get_mergeable_page(struct rmap_item *rmap_item)
 	page = follow_page(vma, addr, FOLL_GET);
 	if (IS_ERR_OR_NULL(page))
 		goto out;
-	if (PageAnon(page) || page_trans_compound_anon(page)) {
+	if ((PageAnon(page) && !PageLazyFree(page)) ||
+			page_trans_compound_anon(page)) {
 		flush_anon_page(vma, page, addr);
 		flush_dcache_page(page);
 	} else {
@@ -1032,13 +1033,20 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
 
 	/*
 	 * We need the page lock to read a stable PageSwapCache in
-	 * write_protect_page().  We use trylock_page() instead of
-	 * lock_page() because we don't want to wait here - we
-	 * prefer to continue scanning and merging different pages,
+	 * write_protect_page() and check lazyfree.
+	 * We use trylock_page() instead of lock_page() because we
+	 * don't want to wait here - we prefer to continue scanning
+	 * and merging different pages,
 	 * then come back to this page when it is unlocked.
 	 */
 	if (!trylock_page(page))
 		goto out;
+
+	if (PageLazyFree(page)) {
+		unlock_page(page);
+		goto out;
+	}
+
 	/*
 	 * If this anonymous page is mapped only here, its pte may need
 	 * to be write-protected.  If it's mapped elsewhere, all of its
@@ -1621,7 +1629,7 @@ next_mm:
 				cond_resched();
 				continue;
 			}
-			if (PageAnon(*page) ||
+			if ((PageAnon(*page) && !PageLazyFree(*page)) ||
 			    page_trans_compound_anon(*page)) {
 				flush_anon_page(vma, *page, ksm_scan.address);
 				flush_dcache_page(*page);
-- 
1.9.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [RFC 0/6] mm: support madvise(MADV_FREE)
  2014-03-14  6:37 ` Minchan Kim
@ 2014-03-14  7:37   ` Zhang Yanfei
  -1 siblings, 0 replies; 35+ messages in thread
From: Zhang Yanfei @ 2014-03-14  7:37 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Rik van Riel, Mel Gorman, Hugh Dickins,
	Dave Hansen, Johannes Weiner, KOSAKI Motohiro, linux-mm,
	linux-kernel, John Stultz, Jason Evans

Hello Minchan

On 03/14/2014 02:37 PM, Minchan Kim wrote:
> This patch is an attempt to support MADV_FREE for Linux.
> 
> Rationale is following as.
> 
> Allocators call munmap(2) when user call free(3) if ptr is
> in mmaped area. But munmap isn't cheap because it have to clean up
> all pte entries, unlinking a vma and returns free pages to buddy
> so overhead would be increased linearly by mmaped area's size.
> So they like madvise_dontneed rather than munmap.
> 
> "dontneed" holds read-side lock of mmap_sem so other threads
> of the process could go with concurrent page faults so it is
> better than munmap if it's not lack of address space.
> But the problem is that most of allocator reuses that address
> space soonish so applications see page fault, page allocation,
> page zeroing if allocator already called madvise_dontneed
> on the address space.
> 
> For avoidng that overheads, other OS have supported MADV_FREE.
> The idea is just mark pages as lazyfree when madvise called
> and purge them if memory pressure happens. Otherwise, VM doesn't
> detach pages on the address space so application could use
> that memory space without above overheads.

I didn't look into the code. Does this mean we just keep the vma,
the pte entries, and the page itself for later possible reuse? If so,
how can we reuse the vma? Would the kernel mark the vma as special
so that it can be reused rather than unmapped? Do you have
an example of this reuse?

Another thing: when I searched MADV_FREE on the internet, I saw that
Rik posted a similar patch in 2007 but that patch didn't
go into the upstream kernel.  And some explanation from Andrew:

------------------------------------------------------
 lazy-freeing-of-memory-through-madv_free.patch

 lazy-freeing-of-memory-through-madv_free-vs-mm-madvise-avoid-exclusive-mmap_sem.patch

 restore-madv_dontneed-to-its-original-linux-behaviour.patch



I think the MADV_FREE changes need more work:



We need crystal-clear statements regarding the present functionality, the new

functionality and how these relate to the spec and to implementations in other

OS'es.  Once we have that info we are in a position to work out whether the

code can be merged as-is, or if additional changes are needed.



Because right now, I don't know where we are with respect to these things and

I doubt if many of our users know either.  How can Michael write a manpage for

this if we don't tell him what it all does?
------------------------------------------------------

Thanks
Zhang Yanfei

> 
> I tweaked jamalloc to use MADV_FREE for the testing.
> 
> diff --git a/src/chunk_mmap.c b/src/chunk_mmap.c
> index 8a42e75..20e31af 100644
> --- a/src/chunk_mmap.c
> +++ b/src/chunk_mmap.c
> @@ -131,7 +131,7 @@ pages_purge(void *addr, size_t length)
>  #  else
>  #    error "No method defined for purging unused dirty pages."
>  #  endif
> -       int err = madvise(addr, length, JEMALLOC_MADV_PURGE);
> +       int err = madvise(addr, length, 5);
>         unzeroed = (JEMALLOC_MADV_ZEROS == false || err != 0);
>  #  undef JEMALLOC_MADV_PURGE
>  #  undef JEMALLOC_MADV_ZEROS
> 
> 
> RAM 2G, CPU 4, ebizzy benchmark(./ebizzy -S 30 -n 512)
> 
> (1.1) stands for 1 process and 1 thread so for exmaple,
> (1.4) is 1 process and 4 thread.
> 
> vanilla jemalloc	 patched jemalloc
> 
> 1.1       1.1
> records:  5              records:  5
> avg:      7404.60        avg:      14059.80
> std:      116.67(1.58%)  std:      93.92(0.67%)
> max:      7564.00        max:      14152.00
> min:      7288.00        min:      13893.00
> 1.4       1.4
> records:  5              records:  5
> avg:      16160.80       avg:      30173.00
> std:      509.80(3.15%)  std:      3050.72(10.11%)
> max:      16728.00       max:      33989.00
> min:      15216.00       min:      25173.00
> 1.8       1.8
> records:  5              records:  5
> avg:      16003.00       avg:      30080.20
> std:      290.40(1.81%)  std:      2063.57(6.86%)
> max:      16537.00       max:      32735.00
> min:      15727.00       min:      27381.00
> 4.1       4.1
> records:  5              records:  5
> avg:      4003.60        avg:      8064.80
> std:      65.33(1.63%)   std:      143.89(1.78%)
> max:      4118.00        max:      8319.00
> min:      3921.00        min:      7888.00
> 4.4       4.4
> records:  5              records:  5
> avg:      3907.40        avg:      7199.80
> std:      48.68(1.25%)   std:      80.21(1.11%)
> max:      3997.00        max:      7320.00
> min:      3863.00        min:      7113.00
> 4.8       4.8
> records:  5              records:  5
> avg:      3893.00        avg:      7195.20
> std:      19.11(0.49%)   std:      101.55(1.41%)
> max:      3927.00        max:      7309.00
> min:      3869.00        min:      7012.00
> 8.1       8.1
> records:  5              records:  5
> avg:      1942.00        avg:      3602.80
> std:      34.60(1.78%)   std:      22.97(0.64%)
> max:      2010.00        max:      3632.00
> min:      1913.00        min:      3563.00
> 8.4       8.4
> records:  5              records:  5
> avg:      1938.00        avg:      3405.60
> std:      32.77(1.69%)   std:      36.25(1.06%)
> max:      1998.00        max:      3468.00
> min:      1905.00        min:      3374.00
> 8.8       8.8
> records:  5              records:  5
> avg:      1977.80        avg:      3434.20
> std:      25.75(1.30%)   std:      57.95(1.69%)
> max:      2011.00        max:      3533.00
> min:      1937.00        min:      3363.00
> 
> So, MADV_FREE is 2 time faster than MADV_DONTNEED for
> every cases.
> 
> I didn't test a lot but it's enough to show the concept and
> direction before LSF/MM.
> 
> Patchset is based on 3.14-rc6.
> 
> Welcome any comment!
> 
> Minchan Kim (6):
>   mm: clean up PAGE_MAPPING_FLAGS
>   mm: work deactivate_page with anon pages
>   mm: support madvise(MADV_FREE)
>   mm: add stat about lazyfree pages
>   mm: reclaim lazyfree pages in swapless system
>   mm: ksm: don't merge lazyfree page
> 
>  include/asm-generic/tlb.h              |  9 ++++++++
>  include/linux/mm.h                     | 39 +++++++++++++++++++++++++++++++++-
>  include/linux/mm_inline.h              |  9 ++++++++
>  include/linux/mmzone.h                 |  1 +
>  include/linux/rmap.h                   |  1 +
>  include/linux/swap.h                   | 15 +++++++++++++
>  include/linux/vm_event_item.h          |  1 +
>  include/uapi/asm-generic/mman-common.h |  1 +
>  mm/ksm.c                               | 18 +++++++++++-----
>  mm/madvise.c                           | 17 +++++++++++++--
>  mm/memory.c                            | 12 ++++++++++-
>  mm/page_alloc.c                        |  5 ++++-
>  mm/rmap.c                              | 25 ++++++++++++++++++----
>  mm/swap.c                              | 20 ++++++++---------
>  mm/swap_state.c                        | 38 ++++++++++++++++++++++++++++++++-
>  mm/vmscan.c                            | 32 +++++++++++++++++++++++++---
>  mm/vmstat.c                            |  2 ++
>  17 files changed, 217 insertions(+), 28 deletions(-)
> 


-- 
Thanks.
Zhang Yanfei

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC 3/6] mm: support madvise(MADV_FREE)
  2014-03-14  6:37   ` Minchan Kim
@ 2014-03-14  7:49     ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-14  7:49 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Rik van Riel, Mel Gorman, Hugh Dickins, Dave Hansen,
	Johannes Weiner, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans

On Fri, Mar 14, 2014 at 03:37:47PM +0900, Minchan Kim wrote:
> Linux doesn't have an ability to free pages lazy while other OS
> already have been supported that named by madvise(MADV_FREE).
> 
> The gain is clear that kernel can evict freed pages rather than
> swapping out or OOM if memory pressure happens.
> 
> Without memory pressure, freed pages would be reused by userspace
> without another additional overhead(ex, page fault + + page allocation
> + page zeroing).
> 
> Firstly, heavy users would be general allocators(ex, jemalloc,
> I hope ptmalloc support it) and jemalloc already have supported
> the feature for other OS(ex, FreeBSD)
> 
> At the moment, this patch would break build other ARCHs which have
> own TLB flush scheme other than that x86 but if there is no objection
> in this direction, I will add patches for handling other ARCHs
> in next iteration.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  include/asm-generic/tlb.h              |  9 ++++++++
>  include/linux/mm.h                     | 35 ++++++++++++++++++++++++++++++-
>  include/linux/rmap.h                   |  1 +
>  include/linux/swap.h                   | 15 ++++++++++++++
>  include/uapi/asm-generic/mman-common.h |  1 +
>  mm/madvise.c                           | 17 +++++++++++++--
>  mm/memory.c                            | 12 ++++++++++-
>  mm/rmap.c                              | 21 +++++++++++++++++--
>  mm/swap_state.c                        | 38 +++++++++++++++++++++++++++++++++-
>  mm/vmscan.c                            | 22 +++++++++++++++++++-
>  10 files changed, 163 insertions(+), 8 deletions(-)
> 
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 5672d7ea1fa0..b82ee729a065 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -116,8 +116,17 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long
>  void tlb_flush_mmu(struct mmu_gather *tlb);
>  void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start,
>  							unsigned long end);
> +int __tlb_madvfree_page(struct mmu_gather *tlb, struct page *page);
>  int __tlb_remove_page(struct mmu_gather *tlb, struct page *page);
>  
> +static inline void tlb_madvfree_page(struct mmu_gather *tlb, struct page *page)
> +{
> +	/* Prevent page free */
> +	get_page(page);
> +	if (!__tlb_remove_page(tlb, MarkLazyFree(page)))
> +		tlb_flush_mmu(tlb);
> +}
> +
>  /* tlb_remove_page
>   *	Similar to __tlb_remove_page but will call tlb_flush_mmu() itself when
>   *	required.
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c1b7414c7bef..9b048cabce27 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -933,10 +933,16 @@ void page_address_init(void);
>   * Please note that, confusingly, "page_mapping" refers to the inode
>   * address_space which maps the page from disk; whereas "page_mapped"
>   * refers to user virtual address space into which the page is mapped.
> + *
> + * PAGE_MAPPING_LZFREE bit is set along with PAGE_MAPPING_ANON bit
> + * and then page->mapping points to an anon_vma. This flag is used
> + * for lazy freeing the page instead of swap.
>   */
>  #define PAGE_MAPPING_ANON	1
>  #define PAGE_MAPPING_KSM	2
> -#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM)
> +#define PAGE_MAPPING_LZFREE	4
> +#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM | \
> +				 PAGE_MAPPING_LZFREE)
>  
>  extern struct address_space *page_mapping(struct page *page);
>  
> @@ -962,6 +968,32 @@ static inline int PageAnon(struct page *page)
>  	return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
>  }
>  
> +static inline void SetPageLazyFree(struct page *page)
> +{
> +	BUG_ON(!PageAnon(page));
> +	BUG_ON(!PageLocked(page));
> +
> +	page->mapping = (void *)((unsigned long)page->mapping |
> +			PAGE_MAPPING_LZFREE);
> +}
> +
> +static inline void ClearPageLazyFree(struct page *page)
> +{
> +	BUG_ON(!PageAnon(page));
> +	BUG_ON(!PageLocked(page));
> +
> +	page->mapping = (void *)((unsigned long)page->mapping &
> +				~PAGE_MAPPING_LZFREE);
> +}
> +
> +static inline int PageLazyFree(struct page *page)
> +{
> +	if (((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
> +			(PAGE_MAPPING_ANON|PAGE_MAPPING_LZFREE))
> +		return 1;
> +	return 0;
> +}
> +
>  /*
>   * Return the pagecache index of the passed page.  Regular pagecache pages
>   * use ->index whereas swapcache pages use ->private
> @@ -1054,6 +1086,7 @@ struct zap_details {
>  	struct address_space *check_mapping;	/* Check page->mapping if set */
>  	pgoff_t	first_index;			/* Lowest page->index to unmap */
>  	pgoff_t last_index;			/* Highest page->index to unmap */
> +	int lazy_free;				/* do lazy free */
>  };
>  
>  struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 1da693d51255..19e74aebb3d5 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -75,6 +75,7 @@ enum ttu_flags {
>  	TTU_UNMAP = 0,			/* unmap mode */
>  	TTU_MIGRATION = 1,		/* migration mode */
>  	TTU_MUNLOCK = 2,		/* munlock mode */
> +	TTU_LAZYFREE  = 3,		/* free lazyfree page */
>  	TTU_ACTION_MASK = 0xff,
>  
>  	TTU_IGNORE_MLOCK = (1 << 8),	/* ignore mlock */
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 46ba0c6c219f..223909c14703 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -13,6 +13,21 @@
>  #include <linux/page-flags.h>
>  #include <asm/page.h>
>  
> +static inline struct page *MarkLazyFree(struct page *p)
> +{
> +	return (struct page *)((unsigned long)p | 0x1UL);
> +}
> +
> +static inline struct page *ClearLazyFree(struct page *p)
> +{
> +	return (struct page *)((unsigned long)p & ~0x1UL);
> +}
> +
> +static inline bool LazyFree(struct page *p)
> +{
> +	return ((unsigned long)p & 0x1UL) ? true : false;
> +}
> +
>  struct notifier_block;
>  
>  struct bio;
> diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
> index 4164529a94f9..7e257e49be2e 100644
> --- a/include/uapi/asm-generic/mman-common.h
> +++ b/include/uapi/asm-generic/mman-common.h
> @@ -34,6 +34,7 @@
>  #define MADV_SEQUENTIAL	2		/* expect sequential page references */
>  #define MADV_WILLNEED	3		/* will need these pages */
>  #define MADV_DONTNEED	4		/* don't need these pages */
> +#define MADV_FREE	5		/* do lazy free */
>  
>  /* common parameters: try to keep these consistent across architectures */
>  #define MADV_REMOVE	9		/* remove these pages & resources */
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 539eeb96b323..2e904289a2bb 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -31,6 +31,7 @@ static int madvise_need_mmap_write(int behavior)
>  	case MADV_REMOVE:
>  	case MADV_WILLNEED:
>  	case MADV_DONTNEED:
> +	case MADV_FREE:
>  		return 0;
>  	default:
>  		/* be safe, default to 1. list exceptions explicitly */
> @@ -272,7 +273,8 @@ static long madvise_willneed(struct vm_area_struct *vma,
>   */
>  static long madvise_dontneed(struct vm_area_struct *vma,
>  			     struct vm_area_struct **prev,
> -			     unsigned long start, unsigned long end)
> +			     unsigned long start, unsigned long end,
> +			     int behavior)
>  {
>  	*prev = vma;
>  	if (vma->vm_flags & (VM_LOCKED|VM_HUGETLB|VM_PFNMAP))
> @@ -284,8 +286,17 @@ static long madvise_dontneed(struct vm_area_struct *vma,
>  			.last_index = ULONG_MAX,
>  		};
>  		zap_page_range(vma, start, end - start, &details);
> +	} else if (behavior == MADV_FREE) {
> +		struct zap_details details = {
> +			.lazy_free = 1,
> +		};
> +
> +		if (vma->vm_file)
> +			return -EINVAL;
> +		zap_page_range(vma, start, end - start, &details);
>  	} else
>  		zap_page_range(vma, start, end - start, NULL);
> +
>  	return 0;
>  }
>  
> @@ -384,8 +395,9 @@ madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
>  		return madvise_remove(vma, prev, start, end);
>  	case MADV_WILLNEED:
>  		return madvise_willneed(vma, prev, start, end);
> +	case MADV_FREE:
>  	case MADV_DONTNEED:
> -		return madvise_dontneed(vma, prev, start, end);
> +		return madvise_dontneed(vma, prev, start, end, behavior);
>  	default:
>  		return madvise_behavior(vma, prev, start, end, behavior);
>  	}
> @@ -403,6 +415,7 @@ madvise_behavior_valid(int behavior)
>  	case MADV_REMOVE:
>  	case MADV_WILLNEED:
>  	case MADV_DONTNEED:
> +	case MADV_FREE:
>  #ifdef CONFIG_KSM
>  	case MADV_MERGEABLE:
>  	case MADV_UNMERGEABLE:
> diff --git a/mm/memory.c b/mm/memory.c
> index 22dfa617bddb..f1f0dc13e8d1 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1093,6 +1093,15 @@ again:
>  
>  			page = vm_normal_page(vma, addr, ptent);
>  			if (unlikely(details) && page) {
> +				if (details->lazy_free && PageAnon(page)) {
> +					ptent = pte_mkold(ptent);
> +					ptent = pte_mkclean(ptent);
> +					set_pte_at(mm, addr, pte, ptent);
> +					tlb_remove_tlb_entry(tlb, pte, addr);
> +					tlb_madvfree_page(tlb, page);
> +					continue;
> +				}
> +
>  				/*
>  				 * unmap_shared_mapping_pages() wants to
>  				 * invalidate cache without truncating:
> @@ -1276,7 +1285,8 @@ static void unmap_page_range(struct mmu_gather *tlb,
>  	pgd_t *pgd;
>  	unsigned long next;
>  
> -	if (details && !details->check_mapping && !details->nonlinear_vma)
> +	if (details && !details->check_mapping && !details->nonlinear_vma &&
> +		!details->lazy_free)
>  		details = NULL;
>  
>  	BUG_ON(addr >= end);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 76069afa6b81..7712f39acfee 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -377,6 +377,15 @@ void __init anon_vma_init(void)
>  	anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC);
>  }
>  
> +static inline bool is_anon_vma(unsigned long mapping)
> +{
> +	unsigned long anon_mapping = mapping & PAGE_MAPPING_FLAGS;
> +	if ((anon_mapping != PAGE_MAPPING_ANON) &&
> +	    (anon_mapping != (PAGE_MAPPING_ANON|PAGE_MAPPING_LZFREE)))
> +		return false;
> +	return true;
> +}
> +
>  /*
>   * Getting a lock on a stable anon_vma from a page off the LRU is tricky!
>   *
> @@ -407,7 +416,7 @@ struct anon_vma *page_get_anon_vma(struct page *page)
>  
>  	rcu_read_lock();
>  	anon_mapping = (unsigned long) ACCESS_ONCE(page->mapping);
> -	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> +	if (!is_anon_vma(anon_mapping))
>  		goto out;
>  	if (!page_mapped(page))
>  		goto out;
> @@ -450,7 +459,7 @@ struct anon_vma *page_lock_anon_vma_read(struct page *page)
>  
>  	rcu_read_lock();
>  	anon_mapping = (unsigned long) ACCESS_ONCE(page->mapping);
> -	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> +	if (!is_anon_vma(anon_mapping))
>  		goto out;
>  	if (!page_mapped(page))
>  		goto out;
> @@ -1165,6 +1174,14 @@ int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  		}
>  		set_pte_at(mm, address, pte,
>  			   swp_entry_to_pte(make_hwpoison_entry(page)));
> +	} else if ((flags & TTU_LAZYFREE) && PageLazyFree(page)) {
> +		BUG_ON(!PageAnon(page));
> +		if (unlikely(pte_dirty(pteval))) {
> +			set_pte_at(mm, address, pte, pteval);
> +			ret = SWAP_FAIL;
> +			goto out_unmap;
> +		}
> +		dec_mm_counter(mm, MM_ANONPAGES);
>  	} else if (PageAnon(page)) {
>  		swp_entry_t entry = { .val = page_private(page) };
>  		pte_t swp_pte;
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index e76ace30d436..0718ecd166dc 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -18,6 +18,7 @@
>  #include <linux/pagevec.h>
>  #include <linux/migrate.h>
>  #include <linux/page_cgroup.h>
> +#include <linux/ksm.h>
>  
>  #include <asm/pgtable.h>
>  
> @@ -256,8 +257,36 @@ void free_page_and_swap_cache(struct page *page)
>  }
>  
>  /*
> + * move @page to inactive LRU's tail so that VM can discard it
> + * rather than swapping hot pages out when memory pressure happens.
> + */
> +static bool move_lazyfree(struct page *page)
> +{
> +	if (!trylock_page(page))
> +		return false;
> +
> +	if (PageKsm(page)) {
> +		unlock_page(page);
> +		return false;
> +	}
> +
> +	if (PageSwapCache(page) &&
> +			try_to_free_swap(page))
> +		ClearPageDirty(page);
> +
> +	if (!PageLazyFree(page)) {
> +		SetPageLazyFree(page);
> +		deactivate_page(page);
> +	}
> +
> +	unlock_page(page);
> +	return true;
> +}
> +
> +/*
>   * Passed an array of pages, drop them all from swapcache and then release
>   * them.  They are removed from the LRU and freed if this is their last use.
> + * If page passed are lazyfree, deactivate them intead of freeing.
>   */
>  void free_pages_and_swap_cache(struct page **pages, int nr)
>  {
> @@ -269,7 +298,14 @@ void free_pages_and_swap_cache(struct page **pages, int nr)
>  		int i;
>  
>  		for (i = 0; i < todo; i++)
> -			free_swap_cache(pagep[i]);
> +			if (LazyFree(pagep[i])) {
> +				pagep[i] = ClearLazyFree(pagep[i]);
> +				/* If we failed, just free */
> +				if (!move_lazyfree(pagep[i]))
> +					free_swap_cache(pagep[i]);

Oops, the patchset picked up an older version from my git tree.
Here is the fix.


diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0718ecd166dc..882f1c8e5bd2 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -300,9 +300,7 @@ void free_pages_and_swap_cache(struct page **pages, int nr)
 		for (i = 0; i < todo; i++)
 			if (LazyFree(pagep[i])) {
 				pagep[i] = ClearLazyFree(pagep[i]);
-				/* If we failed, just free */
-				if (!move_lazyfree(pagep[i]))
-					free_swap_cache(pagep[i]);
+				move_lazyfree(pagep[i]);
 			} else {
 				free_swap_cache(pagep[i]);
 			}

-- 
Kind regards,
Minchan Kim

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [RFC 0/6] mm: support madvise(MADV_FREE)
  2014-03-14  7:37   ` Zhang Yanfei
@ 2014-03-14  7:56     ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-14  7:56 UTC (permalink / raw)
  To: Zhang Yanfei
  Cc: Andrew Morton, Rik van Riel, Mel Gorman, Hugh Dickins,
	Dave Hansen, Johannes Weiner, KOSAKI Motohiro, linux-mm,
	linux-kernel, John Stultz, Jason Evans

Hello Zhang,

On Fri, Mar 14, 2014 at 03:37:28PM +0800, Zhang Yanfei wrote:
> Hello Minchan
> 
> On 03/14/2014 02:37 PM, Minchan Kim wrote:
> > This patch is an attempt to support MADV_FREE for Linux.
> > 
> > Rationale is following as.
> > 
> > Allocators call munmap(2) when user call free(3) if ptr is
> > in mmaped area. But munmap isn't cheap because it have to clean up
> > all pte entries, unlinking a vma and returns free pages to buddy
> > so overhead would be increased linearly by mmaped area's size.
> > So they like madvise_dontneed rather than munmap.
> > 
> > "dontneed" holds read-side lock of mmap_sem so other threads
> > of the process could go with concurrent page faults so it is
> > better than munmap if it's not lack of address space.
> > But the problem is that most of allocator reuses that address
> > space soonish so applications see page fault, page allocation,
> > page zeroing if allocator already called madvise_dontneed
> > on the address space.
> > 
> > For avoidng that overheads, other OS have supported MADV_FREE.
> > The idea is just mark pages as lazyfree when madvise called
> > and purge them if memory pressure happens. Otherwise, VM doesn't
> > detach pages on the address space so application could use
> > that memory space without above overheads.
> 
> I didn't look into the code. Does this mean we just keep the vma,
> the pte entries, and the page itself for later possible reuse? If so,

Just clear the pte access bit and dirty bit so the VM can notice
whether the user has dirtied the page again since madvise(MADV_FREE)
was called. If so, the VM must not purge the page. Otherwise, the VM
can purge the page instead of swapping it out, and the user may later
see zeroed pages.
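
To illustrate from userspace, here is a rough sketch of those semantics
(not part of the patchset; it assumes this patchset's proposed
MADV_FREE value of 5, which is not in any released header yet):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_FREE
#define MADV_FREE 5	/* value proposed by this patchset */
#endif

int main(void)
{
	size_t len = 4096 * 1024;
	unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	memset(p, 0xaa, len);		/* dirty the pages */

	/* Tell the kernel these pages may be discarded lazily. */
	if (madvise(p, len, MADV_FREE))
		perror("madvise");

	/*
	 * Case 1: the page is reused (redirtied) before reclaim runs,
	 * so the VM keeps its contents.
	 * Case 2: reclaim purges the page first, so a later access
	 * faults in a zeroed page instead of swapping anything.
	 */
	p[0] = 1;			/* redirty the first page */
	printf("p[1] = %#x (0xaa if kept, 0 if purged)\n", p[1]);

	munmap(p, len);
	return 0;
}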

> how can we reuse the vma? Would the kernel mark the vma as special
> so that it can be reused rather than unmapped? Do you have

I don't get it. Could you elaborate it a bit?

> an example of this reuse?

As I said, jemalloc and tcmalloc already support it on other OSes.
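
Just to sketch how an allocator's purge path would use it (hypothetical
helper name, not jemalloc's or tcmalloc's real code; the actual jemalloc
change is the one-liner in the cover letter):

#include <stddef.h>
#include <sys/mman.h>

#ifndef MADV_FREE
#define MADV_FREE 5	/* value proposed by this patchset */
#endif

/* Called when a run of dirty pages becomes unused inside the allocator. */
static void chunk_purge_lazy(void *addr, size_t length)
{
	/*
	 * Unlike MADV_DONTNEED, the mapping and ptes stay intact: if the
	 * allocator hands this range out again before reclaim runs, the
	 * application reuses it without fault/allocation/zeroing cost.
	 */
	(void)madvise(addr, length, MADV_FREE);
}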

> 
> Another thing: when I searched MADV_FREE on the internet, I saw that
> Rik posted a similar patch in 2007 but that patch didn't
> go into the upstream kernel.  And some explanation from Andrew:
> 
> ------------------------------------------------------
>  lazy-freeing-of-memory-through-madv_free.patch
> 
>  lazy-freeing-of-memory-through-madv_free-vs-mm-madvise-avoid-exclusive-mmap_sem.patch
> 
>  restore-madv_dontneed-to-its-original-linux-behaviour.patch
> 
> 
> 
> I think the MADV_FREE changes need more work:
> 
> 
> 
> We need crystal-clear statements regarding the present functionality, the new
> 
> functionality and how these relate to the spec and to implementations in other
> 
> OS'es.  Once we have that info we are in a position to work out whether the
> 
> code can be merged as-is, or if additional changes are needed.
> 
> 
> 
> Because right now, I don't know where we are with respect to these things and
> 
> I doubt if many of our users know either.  How can Michael write a manpage for
> 
> this if we don't tell him what it all does?
> ------------------------------------------------------

True. I need more documentation and will write it if everybody agrees
on this new feature.

Thanks.

> 
> Thanks
> Zhang Yanfei
> 
> > 
> > I tweaked jamalloc to use MADV_FREE for the testing.
> > 
> > diff --git a/src/chunk_mmap.c b/src/chunk_mmap.c
> > index 8a42e75..20e31af 100644
> > --- a/src/chunk_mmap.c
> > +++ b/src/chunk_mmap.c
> > @@ -131,7 +131,7 @@ pages_purge(void *addr, size_t length)
> >  #  else
> >  #    error "No method defined for purging unused dirty pages."
> >  #  endif
> > -       int err = madvise(addr, length, JEMALLOC_MADV_PURGE);
> > +       int err = madvise(addr, length, 5);
> >         unzeroed = (JEMALLOC_MADV_ZEROS == false || err != 0);
> >  #  undef JEMALLOC_MADV_PURGE
> >  #  undef JEMALLOC_MADV_ZEROS
> > 
> > 
> > RAM 2G, CPU 4, ebizzy benchmark(./ebizzy -S 30 -n 512)
> > 
> > (1.1) stands for 1 process and 1 thread so for exmaple,
> > (1.4) is 1 process and 4 thread.
> > 
> > vanilla jemalloc	 patched jemalloc
> > 
> > 1.1       1.1
> > records:  5              records:  5
> > avg:      7404.60        avg:      14059.80
> > std:      116.67(1.58%)  std:      93.92(0.67%)
> > max:      7564.00        max:      14152.00
> > min:      7288.00        min:      13893.00
> > 1.4       1.4
> > records:  5              records:  5
> > avg:      16160.80       avg:      30173.00
> > std:      509.80(3.15%)  std:      3050.72(10.11%)
> > max:      16728.00       max:      33989.00
> > min:      15216.00       min:      25173.00
> > 1.8       1.8
> > records:  5              records:  5
> > avg:      16003.00       avg:      30080.20
> > std:      290.40(1.81%)  std:      2063.57(6.86%)
> > max:      16537.00       max:      32735.00
> > min:      15727.00       min:      27381.00
> > 4.1       4.1
> > records:  5              records:  5
> > avg:      4003.60        avg:      8064.80
> > std:      65.33(1.63%)   std:      143.89(1.78%)
> > max:      4118.00        max:      8319.00
> > min:      3921.00        min:      7888.00
> > 4.4       4.4
> > records:  5              records:  5
> > avg:      3907.40        avg:      7199.80
> > std:      48.68(1.25%)   std:      80.21(1.11%)
> > max:      3997.00        max:      7320.00
> > min:      3863.00        min:      7113.00
> > 4.8       4.8
> > records:  5              records:  5
> > avg:      3893.00        avg:      7195.20
> > std:      19.11(0.49%)   std:      101.55(1.41%)
> > max:      3927.00        max:      7309.00
> > min:      3869.00        min:      7012.00
> > 8.1       8.1
> > records:  5              records:  5
> > avg:      1942.00        avg:      3602.80
> > std:      34.60(1.78%)   std:      22.97(0.64%)
> > max:      2010.00        max:      3632.00
> > min:      1913.00        min:      3563.00
> > 8.4       8.4
> > records:  5              records:  5
> > avg:      1938.00        avg:      3405.60
> > std:      32.77(1.69%)   std:      36.25(1.06%)
> > max:      1998.00        max:      3468.00
> > min:      1905.00        min:      3374.00
> > 8.8       8.8
> > records:  5              records:  5
> > avg:      1977.80        avg:      3434.20
> > std:      25.75(1.30%)   std:      57.95(1.69%)
> > max:      2011.00        max:      3533.00
> > min:      1937.00        min:      3363.00
> > 
> > So, MADV_FREE is 2 time faster than MADV_DONTNEED for
> > every cases.
> > 
> > I didn't test a lot but it's enough to show the concept and
> > direction before LSF/MM.
> > 
> > Patchset is based on 3.14-rc6.
> > 
> > Welcome any comment!
> > 
> > Minchan Kim (6):
> >   mm: clean up PAGE_MAPPING_FLAGS
> >   mm: work deactivate_page with anon pages
> >   mm: support madvise(MADV_FREE)
> >   mm: add stat about lazyfree pages
> >   mm: reclaim lazyfree pages in swapless system
> >   mm: ksm: don't merge lazyfree page
> > 
> >  include/asm-generic/tlb.h              |  9 ++++++++
> >  include/linux/mm.h                     | 39 +++++++++++++++++++++++++++++++++-
> >  include/linux/mm_inline.h              |  9 ++++++++
> >  include/linux/mmzone.h                 |  1 +
> >  include/linux/rmap.h                   |  1 +
> >  include/linux/swap.h                   | 15 +++++++++++++
> >  include/linux/vm_event_item.h          |  1 +
> >  include/uapi/asm-generic/mman-common.h |  1 +
> >  mm/ksm.c                               | 18 +++++++++++-----
> >  mm/madvise.c                           | 17 +++++++++++++--
> >  mm/memory.c                            | 12 ++++++++++-
> >  mm/page_alloc.c                        |  5 ++++-
> >  mm/rmap.c                              | 25 ++++++++++++++++++----
> >  mm/swap.c                              | 20 ++++++++---------
> >  mm/swap_state.c                        | 38 ++++++++++++++++++++++++++++++++-
> >  mm/vmscan.c                            | 32 +++++++++++++++++++++++++---
> >  mm/vmstat.c                            |  2 ++
> >  17 files changed, 217 insertions(+), 28 deletions(-)
> > 
> 
> 
> -- 
> Thanks.
> Zhang Yanfei
> 

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC 3/6] mm: support madvise(MADV_FREE)
  2014-03-14  6:37   ` Minchan Kim
@ 2014-03-14 13:33     ` Kirill A. Shutemov
  -1 siblings, 0 replies; 35+ messages in thread
From: Kirill A. Shutemov @ 2014-03-14 13:33 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Rik van Riel, Mel Gorman, Hugh Dickins,
	Dave Hansen, Johannes Weiner, KOSAKI Motohiro, linux-mm,
	linux-kernel, John Stultz, Jason Evans

On Fri, Mar 14, 2014 at 03:37:47PM +0900, Minchan Kim wrote:
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c1b7414c7bef..9b048cabce27 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -933,10 +933,16 @@ void page_address_init(void);
>   * Please note that, confusingly, "page_mapping" refers to the inode
>   * address_space which maps the page from disk; whereas "page_mapped"
>   * refers to user virtual address space into which the page is mapped.
> + *
> + * PAGE_MAPPING_LZFREE bit is set along with PAGE_MAPPING_ANON bit
> + * and then page->mapping points to an anon_vma. This flag is used
> + * for lazy freeing the page instead of swap.
>   */
>  #define PAGE_MAPPING_ANON	1
>  #define PAGE_MAPPING_KSM	2
> -#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM)
> +#define PAGE_MAPPING_LZFREE	4
> +#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM | \
> +				 PAGE_MAPPING_LZFREE)

Is it safe to use the third bit of the pointer everywhere?

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC 3/6] mm: support madvise(MADV_FREE)
  2014-03-14 13:33     ` Kirill A. Shutemov
@ 2014-03-14 15:24       ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-14 15:24 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Andrew Morton, Rik van Riel, Mel Gorman, Hugh Dickins,
	Dave Hansen, Johannes Weiner, KOSAKI Motohiro, linux-mm,
	linux-kernel, John Stultz, Jason Evans

On Fri, Mar 14, 2014 at 03:33:11PM +0200, Kirill A. Shutemov wrote:
> On Fri, Mar 14, 2014 at 03:37:47PM +0900, Minchan Kim wrote:
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index c1b7414c7bef..9b048cabce27 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -933,10 +933,16 @@ void page_address_init(void);
> >   * Please note that, confusingly, "page_mapping" refers to the inode
> >   * address_space which maps the page from disk; whereas "page_mapped"
> >   * refers to user virtual address space into which the page is mapped.
> > + *
> > + * PAGE_MAPPING_LZFREE bit is set along with PAGE_MAPPING_ANON bit
> > + * and then page->mapping points to an anon_vma. This flag is used
> > + * for lazy freeing the page instead of swap.
> >   */
> >  #define PAGE_MAPPING_ANON	1
> >  #define PAGE_MAPPING_KSM	2
> > -#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM)
> > +#define PAGE_MAPPING_LZFREE	4
> > +#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM | \
> > +				 PAGE_MAPPING_LZFREE)
> 
> Is it safe to use third bit in pointer everywhere?

I overlooked ARCH_SLAB_MINALIGN, which is 8 bytes on most
architectures, but some architectures surely have less than that
(e.g., 4 bytes).

An alternative is PG_private or PG_private2. Those flags are mostly
used for file pages; zsmalloc uses PG_private too, but its pages
should never be on the LRU. The other user in mm/ is memory hotplug,
but I guess it doesn't need PG_private set, since I couldn't find a
PagePrivate check for it.

So I think using that flag for anon pages is fine if we tweak
page_has_private() to apply only to !PageAnon pages.
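
To make the constraint concrete, a rough sketch (illustrative only) of
how the extra bit in page->mapping would be tested, and why it needs
the anon_vma allocation to be at least 8-byte aligned:

/*
 * Illustrative only.  The flag lives in the low bits of page->mapping,
 * so the anon_vma pointer must be at least 8-byte aligned for bit 2
 * (value 4) to be free; a 4-byte ARCH_SLAB_MINALIGN breaks this.
 */
#define PAGE_MAPPING_ANON       1
#define PAGE_MAPPING_KSM        2
#define PAGE_MAPPING_LZFREE     4

static inline int PageLazyFree(struct page *page)
{
        unsigned long mapping = (unsigned long)page->mapping;

        return (mapping & (PAGE_MAPPING_ANON | PAGE_MAPPING_LZFREE)) ==
               (PAGE_MAPPING_ANON | PAGE_MAPPING_LZFREE);
}

/*
 * Any architecture enabling this would want a build-time guard along
 * the lines of BUILD_BUG_ON(ARCH_SLAB_MINALIGN < 8) in an init path.
 */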

> 
> -- 
>  Kirill A. Shutemov
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC 0/6] mm: support madvise(MADV_FREE)
  2014-03-14  6:37 ` Minchan Kim
@ 2014-03-18 17:55   ` Andy Lutomirski
  -1 siblings, 0 replies; 35+ messages in thread
From: Andy Lutomirski @ 2014-03-18 17:55 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: Rik van Riel, Mel Gorman, Hugh Dickins, Dave Hansen,
	Johannes Weiner, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans

On 03/13/2014 11:37 PM, Minchan Kim wrote:
> This patch is an attempt to support MADV_FREE for Linux.
> 
> Rationale is following as.
> 
> Allocators call munmap(2) when user call free(3) if ptr is
> in mmaped area. But munmap isn't cheap because it have to clean up
> all pte entries, unlinking a vma and returns free pages to buddy
> so overhead would be increased linearly by mmaped area's size.
> So they like madvise_dontneed rather than munmap.
> 
> "dontneed" holds read-side lock of mmap_sem so other threads
> of the process could go with concurrent page faults so it is
> better than munmap if it's not lack of address space.
> But the problem is that most of allocator reuses that address
> space soonish so applications see page fault, page allocation,
> page zeroing if allocator already called madvise_dontneed
> on the address space.
> 
> For avoidng that overheads, other OS have supported MADV_FREE.
> The idea is just mark pages as lazyfree when madvise called
> and purge them if memory pressure happens. Otherwise, VM doesn't
> detach pages on the address space so application could use
> that memory space without above overheads.

I must be missing something.

If the application issues MADV_FREE and then writes to the MADV_FREEd
range, the kernel needs to know that the pages are no longer safe to
lazily free.  This would presumably happen via a page fault on write.
For that to happen reliably, the kernel has to write protect the pages
when MADV_FREE is called, which in turn requires flushing the TLBs.

How does this end up being faster than munmap?

--Andy

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC 3/6] mm: support madvise(MADV_FREE)
  2014-03-14  6:37   ` Minchan Kim
@ 2014-03-18 18:26     ` Johannes Weiner
  -1 siblings, 0 replies; 35+ messages in thread
From: Johannes Weiner @ 2014-03-18 18:26 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Rik van Riel, Mel Gorman, Hugh Dickins,
	Dave Hansen, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans

On Fri, Mar 14, 2014 at 03:37:47PM +0900, Minchan Kim wrote:
> Linux doesn't have an ability to free pages lazy while other OS
> already have been supported that named by madvise(MADV_FREE).
> 
> The gain is clear that kernel can evict freed pages rather than
> swapping out or OOM if memory pressure happens.
> 
> Without memory pressure, freed pages would be reused by userspace
> without another additional overhead(ex, page fault + + page allocation
> + page zeroing).
> 
> Firstly, heavy users would be general allocators(ex, jemalloc,
> I hope ptmalloc support it) and jemalloc already have supported
> the feature for other OS(ex, FreeBSD)
> 
> At the moment, this patch would break build other ARCHs which have
> own TLB flush scheme other than that x86 but if there is no objection
> in this direction, I will add patches for handling other ARCHs
> in next iteration.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>

> @@ -284,8 +286,17 @@ static long madvise_dontneed(struct vm_area_struct *vma,
>  			.last_index = ULONG_MAX,
>  		};
>  		zap_page_range(vma, start, end - start, &details);
> +	} else if (behavior == MADV_FREE) {
> +		struct zap_details details = {
> +			.lazy_free = 1,
> +		};
> +
> +		if (vma->vm_file)
> +			return -EINVAL;
> +		zap_page_range(vma, start, end - start, &details);

Wouldn't a custom page table walker to clear dirty bits and move pages
be better?  It's awkward to hook this into the freeing code and then
special case the pages and not actually free them.

> @@ -817,6 +817,25 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  
>  		sc->nr_scanned++;
>  
> +		if (PageLazyFree(page)) {
> +			switch (try_to_unmap(page, ttu_flags)) {

I don't get why we need a page flag for this.  page_check_references()
could use the rmap walk to also check if any pte/pmd is dirty.  If so,
you have to swap the page.  If all are clean, it can be discarded.
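
Roughly something like this (sketch only; I'm going from memory of the
rmap interfaces, so the helper names are not verified):

/* Sketch: does any pte mapping this page carry the dirty bit? */
static int lazyfree_dirty_one(struct page *page, struct vm_area_struct *vma,
                              unsigned long addr, void *arg)
{
        int *dirty = arg;
        spinlock_t *ptl;
        pte_t *pte;

        pte = page_check_address(page, vma->vm_mm, addr, &ptl, 0);
        if (!pte)
                return SWAP_AGAIN;
        if (pte_dirty(*pte))
                *dirty = 1;
        pte_unmap_unlock(pte, ptl);
        /* Stop walking as soon as one dirty mapping is found. */
        return *dirty ? SWAP_SUCCESS : SWAP_AGAIN;
}

static bool lazyfree_page_dirty(struct page *page)
{
        int dirty = 0;
        struct rmap_walk_control rwc = {
                .rmap_one = lazyfree_dirty_one,
                .arg = &dirty,
        };

        rmap_walk(page, &rwc);
        return dirty;
}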

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC 0/6] mm: support madvise(MADV_FREE)
  2014-03-18 17:55   ` Andy Lutomirski
  (?)
@ 2014-03-19  0:18   ` Minchan Kim
  2014-03-19  0:23       ` Andy Lutomirski
  -1 siblings, 1 reply; 35+ messages in thread
From: Minchan Kim @ 2014-03-19  0:18 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Andrew Morton, Rik van Riel, Mel Gorman, Hugh Dickins,
	Dave Hansen, Johannes Weiner, KOSAKI Motohiro, linux-mm,
	linux-kernel, John Stultz, Jason Evans

Hello,

On Tue, Mar 18, 2014 at 10:55:24AM -0700, Andy Lutomirski wrote:
> On 03/13/2014 11:37 PM, Minchan Kim wrote:
> > This patch is an attempt to support MADV_FREE for Linux.
> > 
> > Rationale is following as.
> > 
> > Allocators call munmap(2) when user call free(3) if ptr is
> > in mmaped area. But munmap isn't cheap because it have to clean up
> > all pte entries, unlinking a vma and returns free pages to buddy
> > so overhead would be increased linearly by mmaped area's size.
> > So they like madvise_dontneed rather than munmap.
> > 
> > "dontneed" holds read-side lock of mmap_sem so other threads
> > of the process could go with concurrent page faults so it is
> > better than munmap if it's not lack of address space.
> > But the problem is that most of allocator reuses that address
> > space soonish so applications see page fault, page allocation,
> > page zeroing if allocator already called madvise_dontneed
> > on the address space.
> > 
> > For avoidng that overheads, other OS have supported MADV_FREE.
> > The idea is just mark pages as lazyfree when madvise called
> > and purge them if memory pressure happens. Otherwise, VM doesn't
> > detach pages on the address space so application could use
> > that memory space without above overheads.
> 
> I must be missing something.
> 
> If the application issues MADV_FREE and then writes to the MADV_FREEd
> range, the kernel needs to know that the pages are no longer safe to
> lazily free.  This would presumably happen via a page fault on write.
> For that to happen reliably, the kernel has to write protect the pages
> when MADV_FREE is called, which in turn requires flushing the TLBs.

It could be done by checking the pte dirty bit. Of course, on
architectures that don't support a dirty bit in hardware, marking the
pte dirty (pte_mkdirty) would have to go through write protection
(CoW), as you said.
> 
> How does this end up being faster than munmap?

Unlike MADV_DONTNEED, MADV_FREE doesn't need to return the pages to
the page allocator, and that overhead is not small when I measured it
on my machine. (Roughly, MADV_FREE costs half of DONTNEED, by avoiding
the page allocator.)

But I'd like to clarify that MADV_FREE's goal is not to make the
syscall itself faster than MADV_DONTNEED; the major goal is to avoid
the unnecessary page fault + page allocation + page zeroing on reuse,
and the swapout of garbage.
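
For reference, a throwaway userspace harness along these lines could
be used to reproduce the comparison (MADV_FREE has no official value
yet; the jemalloc tweak in the cover letter hardcodes 5, which this
assumes):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#ifndef MADV_FREE
#define MADV_FREE 5     /* value used by this RFC; not a stable ABI */
#endif

int main(int argc, char **argv)
{
        size_t len = 64 << 20;
        int advice = (argc > 1 && !strcmp(argv[1], "free")) ?
                                MADV_FREE : MADV_DONTNEED;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct timespec t0, t1;
        int i;

        if (p == MAP_FAILED)
                return 1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < 10; i++) {
                memset(p, i, len);              /* dirty (reuse) the range */
                madvise(p, len, advice);        /* then "free" it again */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%s: %.3f sec\n",
               advice == MADV_FREE ? "MADV_FREE" : "MADV_DONTNEED",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return 0;
}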

> 
> --Andy
> 

-- 
Kind regards,
Minchan Kim


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC 0/6] mm: support madvise(MADV_FREE)
  2014-03-19  0:18   ` Minchan Kim
@ 2014-03-19  0:23       ` Andy Lutomirski
  0 siblings, 0 replies; 35+ messages in thread
From: Andy Lutomirski @ 2014-03-19  0:23 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Rik van Riel, Mel Gorman, Hugh Dickins,
	Dave Hansen, Johannes Weiner, KOSAKI Motohiro, linux-mm,
	linux-kernel, John Stultz, Jason Evans

On Tue, Mar 18, 2014 at 5:18 PM, Minchan Kim <minchan@kernel.org> wrote:
> Hello,
>
> On Tue, Mar 18, 2014 at 10:55:24AM -0700, Andy Lutomirski wrote:
>> On 03/13/2014 11:37 PM, Minchan Kim wrote:
>> > This patch is an attempt to support MADV_FREE for Linux.
>> >
>> > Rationale is following as.
>> >
>> > Allocators call munmap(2) when user call free(3) if ptr is
>> > in mmaped area. But munmap isn't cheap because it have to clean up
>> > all pte entries, unlinking a vma and returns free pages to buddy
>> > so overhead would be increased linearly by mmaped area's size.
>> > So they like madvise_dontneed rather than munmap.
>> >
>> > "dontneed" holds read-side lock of mmap_sem so other threads
>> > of the process could go with concurrent page faults so it is
>> > better than munmap if it's not lack of address space.
>> > But the problem is that most of allocator reuses that address
>> > space soonish so applications see page fault, page allocation,
>> > page zeroing if allocator already called madvise_dontneed
>> > on the address space.
>> >
>> > For avoidng that overheads, other OS have supported MADV_FREE.
>> > The idea is just mark pages as lazyfree when madvise called
>> > and purge them if memory pressure happens. Otherwise, VM doesn't
>> > detach pages on the address space so application could use
>> > that memory space without above overheads.
>>
>> I must be missing something.
>>
>> If the application issues MADV_FREE and then writes to the MADV_FREEd
>> range, the kernel needs to know that the pages are no longer safe to
>> lazily free.  This would presumably happen via a page fault on write.
>> For that to happen reliably, the kernel has to write protect the pages
>> when MADV_FREE is called, which in turn requires flushing the TLBs.
>
> It could be done by pte_dirty bit check. Of course, if some architectures
> don't support it by H/W, pte_mkdirty would make it CoW as you said.

If the page already has dirty PTEs, then you need to clear the dirty
bits and flush TLBs so that other CPUs notice that the PTEs are clean,
I think.

Also, this has very odd semantics wrt reading the page after MADV_FREE
-- is reading the page guaranteed to un-free it?

>>
>> How does this end up being faster than munmap?
>
> MADV_FREE doesn't need to return back the pages into page allocator
> compared to MADV_DONTNEED and the overhead is not small when I measured
> that on my machine.(Roughly, MADV_FREE's cost is half of DONTNEED through
> avoiding involving page allocator.)
>
> But I'd like to clarify that it's not MADV_FREE's goal that syscall
> itself should be faster than MADV_DONTNEED but major goal is to
> avoid unnecessary page fault + page allocation + page zeroing +
> garbage swapout.

This sounds like it might be better solved by trying to make munmap or
MADV_DONTNEED faster.  Maybe those functions should lazily give pages
back to the buddy allocator.

--Andy

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC 0/6] mm: support madvise(MADV_FREE)
  2014-03-19  0:23       ` Andy Lutomirski
  (?)
@ 2014-03-19  1:02       ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-19  1:02 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Andrew Morton, Rik van Riel, Mel Gorman, Hugh Dickins,
	Dave Hansen, Johannes Weiner, KOSAKI Motohiro, linux-mm,
	linux-kernel, John Stultz, Jason Evans

On Tue, Mar 18, 2014 at 05:23:37PM -0700, Andy Lutomirski wrote:
> On Tue, Mar 18, 2014 at 5:18 PM, Minchan Kim <minchan@kernel.org> wrote:
> > Hello,
> >
> > On Tue, Mar 18, 2014 at 10:55:24AM -0700, Andy Lutomirski wrote:
> >> On 03/13/2014 11:37 PM, Minchan Kim wrote:
> >> > This patch is an attempt to support MADV_FREE for Linux.
> >> >
> >> > Rationale is following as.
> >> >
> >> > Allocators call munmap(2) when user call free(3) if ptr is
> >> > in mmaped area. But munmap isn't cheap because it have to clean up
> >> > all pte entries, unlinking a vma and returns free pages to buddy
> >> > so overhead would be increased linearly by mmaped area's size.
> >> > So they like madvise_dontneed rather than munmap.
> >> >
> >> > "dontneed" holds read-side lock of mmap_sem so other threads
> >> > of the process could go with concurrent page faults so it is
> >> > better than munmap if it's not lack of address space.
> >> > But the problem is that most of allocator reuses that address
> >> > space soonish so applications see page fault, page allocation,
> >> > page zeroing if allocator already called madvise_dontneed
> >> > on the address space.
> >> >
> >> > For avoidng that overheads, other OS have supported MADV_FREE.
> >> > The idea is just mark pages as lazyfree when madvise called
> >> > and purge them if memory pressure happens. Otherwise, VM doesn't
> >> > detach pages on the address space so application could use
> >> > that memory space without above overheads.
> >>
> >> I must be missing something.
> >>
> >> If the application issues MADV_FREE and then writes to the MADV_FREEd
> >> range, the kernel needs to know that the pages are no longer safe to
> >> lazily free.  This would presumably happen via a page fault on write.
> >> For that to happen reliably, the kernel has to write protect the pages
> >> when MADV_FREE is called, which in turn requires flushing the TLBs.
> >
> > It could be done by pte_dirty bit check. Of course, if some architectures
> > don't support it by H/W, pte_mkdirty would make it CoW as you said.
> 
> If the page already has dirty PTEs, then you need to clear the dirty
> bits and flush TLBs so that other CPUs notice that the PTEs are clean,
> I think.

True. I didn't mean we don't need a TLB flush. Look at the code,
although there are lots of bugs in RFC v1.

> 
> Also, this has very odd semantics wrt reading the page after MADV_FREE
> -- is reading the page guaranteed to un-free it?

Yep, I thought about that oddness but didn't reach a conclusion,
because other OSes seem to work like that:
http://www.freebsd.org/cgi/man.cgi?query=madvise&sektion=2

But we could fix it easily by checking the access bit instead of the
dirty bit.
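
Something along these lines for the reclaim-side test (illustrative
only; checking both bits to be safe, so that either a read or a write
since MADV_FREE counts as reuse):

/* Any access -- read or write -- since MADV_FREE keeps the page. */
static bool lazyfree_page_reused(pte_t pte)
{
        return pte_young(pte) || pte_dirty(pte);
}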

> 
> >>
> >> How does this end up being faster than munmap?
> >
> > MADV_FREE doesn't need to return back the pages into page allocator
> > compared to MADV_DONTNEED and the overhead is not small when I measured
> > that on my machine.(Roughly, MADV_FREE's cost is half of DONTNEED through
> > avoiding involving page allocator.)
> >
> > But I'd like to clarify that it's not MADV_FREE's goal that syscall
> > itself should be faster than MADV_DONTNEED but major goal is to
> > avoid unnecessary page fault + page allocation + page zeroing +
> > garbage swapout.
> 
> This sounds like it might be better solved by trying to make munmap or
> MADV_DONTNEED faster.  Maybe those functions should lazily give pages
> back to the buddy allocator.

About munmap: it needs the write side of mmap_sem, and that heavily
hurts allocator performance in multi-threaded programs.

About MADV_DONTNEED: Rik van Riel tried to replace MADV_DONTNEED with
MADV_FREE in 2007 (http://lwn.net/Articles/230799/), but I don't know
why it was dropped. One thing I can imagine is that it could cause a
regression, because users of MADV_DONTNEED expect the RSS to decrease
when the syscall is called.

> 
> --Andy
> 

-- 
Kind regards,
Minchan Kim


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC 3/6] mm: support madvise(MADV_FREE)
  2014-03-18 18:26     ` Johannes Weiner
  (?)
@ 2014-03-19  1:22     ` Minchan Kim
  -1 siblings, 0 replies; 35+ messages in thread
From: Minchan Kim @ 2014-03-19  1:22 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Rik van Riel, Mel Gorman, Hugh Dickins,
	Dave Hansen, KOSAKI Motohiro, linux-mm, linux-kernel,
	John Stultz, Jason Evans

Hello Hannes,

On Tue, Mar 18, 2014 at 02:26:21PM -0400, Johannes Weiner wrote:
> On Fri, Mar 14, 2014 at 03:37:47PM +0900, Minchan Kim wrote:
> > Linux doesn't have an ability to free pages lazy while other OS
> > already have been supported that named by madvise(MADV_FREE).
> > 
> > The gain is clear that kernel can evict freed pages rather than
> > swapping out or OOM if memory pressure happens.
> > 
> > Without memory pressure, freed pages would be reused by userspace
> > without another additional overhead(ex, page fault + + page allocation
> > + page zeroing).
> > 
> > Firstly, heavy users would be general allocators(ex, jemalloc,
> > I hope ptmalloc support it) and jemalloc already have supported
> > the feature for other OS(ex, FreeBSD)
> > 
> > At the moment, this patch would break build other ARCHs which have
> > own TLB flush scheme other than that x86 but if there is no objection
> > in this direction, I will add patches for handling other ARCHs
> > in next iteration.
> > 
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> 
> > @@ -284,8 +286,17 @@ static long madvise_dontneed(struct vm_area_struct *vma,
> >  			.last_index = ULONG_MAX,
> >  		};
> >  		zap_page_range(vma, start, end - start, &details);
> > +	} else if (behavior == MADV_FREE) {
> > +		struct zap_details details = {
> > +			.lazy_free = 1,
> > +		};
> > +
> > +		if (vma->vm_file)
> > +			return -EINVAL;
> > +		zap_page_range(vma, start, end - start, &details);
> 
> Wouldn't a custom page table walker to clear dirty bits and move pages
> be better?  It's awkward to hook this into the freeing code and then
> special case the pages and not actually free them.

No problem.

> 
> > @@ -817,6 +817,25 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> >  
> >  		sc->nr_scanned++;
> >  
> > +		if (PageLazyFree(page)) {
> > +			switch (try_to_unmap(page, ttu_flags)) {
> 
> I don't get why we need a page flag for this.  page_check_references()
> could use the rmap walk to also check if any pte/pmd is dirty.  If so,
> you have to swap the page.  If all are clean, it can be discarded.

Ugh, you're right. I guess it could work.
I will look into that in the next iteration.

Thanks!

> 

-- 
Kind regards,
Minchan Kim


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC 0/6] mm: support madvise(MADV_FREE)
  2014-03-19  0:23       ` Andy Lutomirski
@ 2014-03-19  5:15         ` Johannes Weiner
  -1 siblings, 0 replies; 35+ messages in thread
From: Johannes Weiner @ 2014-03-19  5:15 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Minchan Kim, Andrew Morton, Rik van Riel, Mel Gorman,
	Hugh Dickins, Dave Hansen, KOSAKI Motohiro, linux-mm,
	linux-kernel, John Stultz, Jason Evans

On Tue, Mar 18, 2014 at 05:23:37PM -0700, Andy Lutomirski wrote:
> On Tue, Mar 18, 2014 at 5:18 PM, Minchan Kim <minchan@kernel.org> wrote:
> > Hello,
> >
> > On Tue, Mar 18, 2014 at 10:55:24AM -0700, Andy Lutomirski wrote:
> >> On 03/13/2014 11:37 PM, Minchan Kim wrote:
> >> > This patch is an attempt to support MADV_FREE for Linux.
> >> >
> >> > Rationale is following as.
> >> >
> >> > Allocators call munmap(2) when user call free(3) if ptr is
> >> > in mmaped area. But munmap isn't cheap because it have to clean up
> >> > all pte entries, unlinking a vma and returns free pages to buddy
> >> > so overhead would be increased linearly by mmaped area's size.
> >> > So they like madvise_dontneed rather than munmap.
> >> >
> >> > "dontneed" holds read-side lock of mmap_sem so other threads
> >> > of the process could go with concurrent page faults so it is
> >> > better than munmap if it's not lack of address space.
> >> > But the problem is that most of allocator reuses that address
> >> > space soonish so applications see page fault, page allocation,
> >> > page zeroing if allocator already called madvise_dontneed
> >> > on the address space.
> >> >
> >> > For avoidng that overheads, other OS have supported MADV_FREE.
> >> > The idea is just mark pages as lazyfree when madvise called
> >> > and purge them if memory pressure happens. Otherwise, VM doesn't
> >> > detach pages on the address space so application could use
> >> > that memory space without above overheads.
> >>
> >> I must be missing something.
> >>
> >> If the application issues MADV_FREE and then writes to the MADV_FREEd
> >> range, the kernel needs to know that the pages are no longer safe to
> >> lazily free.  This would presumably happen via a page fault on write.
> >> For that to happen reliably, the kernel has to write protect the pages
> >> when MADV_FREE is called, which in turn requires flushing the TLBs.
> >
> > It could be done by pte_dirty bit check. Of course, if some architectures
> > don't support it by H/W, pte_mkdirty would make it CoW as you said.
> 
> If the page already has dirty PTEs, then you need to clear the dirty
> bits and flush TLBs so that other CPUs notice that the PTEs are clean,
> I think.
> 
> Also, this has very odd semantics wrt reading the page after MADV_FREE
> -- is reading the page guaranteed to un-free it?

MADV_FREE simply invalidates content.  Sure, you can read at a given
address repeatedly after it.  You might see a different page every
time you do it, but it doesn't matter; the content is undefined.

It's no different than doing malloc() and looking at the memory before
writing anything in it.  After MADV_FREE, the memory is like a freshly
malloc'd chunk: the first access may result in page faults and the
content is undefined until you write it.
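
From userspace the contract would look like this (illustrative;
MADV_FREE's value is whatever the patched kernel defines, 5 in this
RFC, and error handling is minimal):

#include <assert.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_FREE
#define MADV_FREE 5     /* RFC value; not a stable ABI constant */
#endif

int main(void)
{
        size_t len = 1 << 20;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;

        memset(p, 0xaa, len);           /* populate and dirty the range */
        madvise(p, len, MADV_FREE);     /* contents now undefined */

        /* A read here may see 0xaa or zeroes, depending on reclaim. */

        memset(p, 0xbb, len);           /* writing makes it defined again */
        assert(p[0] == (char)0xbb);
        return 0;
}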

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2014-03-19  5:16 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-03-14  6:37 [RFC 0/6] mm: support madvise(MADV_FREE) Minchan Kim
2014-03-14  6:37 ` Minchan Kim
2014-03-14  6:37 ` [RFC 1/6] mm: clean up PAGE_MAPPING_FLAGS Minchan Kim
2014-03-14  6:37   ` Minchan Kim
2014-03-14  6:37 ` [RFC 2/6] mm: work deactivate_page with anon pages Minchan Kim
2014-03-14  6:37   ` Minchan Kim
2014-03-14  6:37 ` [RFC 3/6] mm: support madvise(MADV_FREE) Minchan Kim
2014-03-14  6:37   ` Minchan Kim
2014-03-14  7:49   ` Minchan Kim
2014-03-14  7:49     ` Minchan Kim
2014-03-14 13:33   ` Kirill A. Shutemov
2014-03-14 13:33     ` Kirill A. Shutemov
2014-03-14 15:24     ` Minchan Kim
2014-03-14 15:24       ` Minchan Kim
2014-03-18 18:26   ` Johannes Weiner
2014-03-18 18:26     ` Johannes Weiner
2014-03-19  1:22     ` Minchan Kim
2014-03-14  6:37 ` [RFC 4/6] mm: add stat about lazyfree pages Minchan Kim
2014-03-14  6:37   ` Minchan Kim
2014-03-14  6:37 ` [RFC 5/6] mm: reclaim lazyfree pages in swapless system Minchan Kim
2014-03-14  6:37   ` Minchan Kim
2014-03-14  6:37 ` [RFC 6/6] mm: ksm: don't merge lazyfree page Minchan Kim
2014-03-14  6:37   ` Minchan Kim
2014-03-14  7:37 ` [RFC 0/6] mm: support madvise(MADV_FREE) Zhang Yanfei
2014-03-14  7:37   ` Zhang Yanfei
2014-03-14  7:56   ` Minchan Kim
2014-03-14  7:56     ` Minchan Kim
2014-03-18 17:55 ` Andy Lutomirski
2014-03-18 17:55   ` Andy Lutomirski
2014-03-19  0:18   ` Minchan Kim
2014-03-19  0:23     ` Andy Lutomirski
2014-03-19  0:23       ` Andy Lutomirski
2014-03-19  1:02       ` Minchan Kim
2014-03-19  5:15       ` Johannes Weiner
2014-03-19  5:15         ` Johannes Weiner

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.