* [PATCH] hugetlbfs: dirty pages as they are added to pagecache
@ 2018-10-18  4:10 Mike Kravetz
  2018-10-18 23:08 ` Andrew Morton
  2018-10-23  7:43 ` Michal Hocko
  0 siblings, 2 replies; 10+ messages in thread
From: Mike Kravetz @ 2018-10-18  4:10 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Andrew Morton, Michal Hocko, Hugh Dickins, Naoya Horiguchi,
	Aneesh Kumar K . V, Andrea Arcangeli, Kirill A . Shutemov,
	Davidlohr Bueso, Alexander Viro, Mike Kravetz, stable

Some test systems were experiencing negative huge page reserve
counts and incorrect file block counts.  This was traced to
/proc/sys/vm/drop_caches removing clean pages from hugetlbfs
file pagecaches.  When non-hugetlbfs explicit code removes the
pages, the appropriate accounting is not performed.

This can be recreated as follows:
 fallocate -l 2M /dev/hugepages/foo
 echo 1 > /proc/sys/vm/drop_caches
 fallocate -l 2M /dev/hugepages/foo
 grep -i huge /proc/meminfo
   AnonHugePages:         0 kB
   ShmemHugePages:        0 kB
   HugePages_Total:    2048
   HugePages_Free:     2047
   HugePages_Rsvd:    18446744073709551615
   HugePages_Surp:        0
   Hugepagesize:       2048 kB
   Hugetlb:         4194304 kB
 ls -lsh /dev/hugepages/foo
   4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo

To address this issue, dirty pages as they are added to pagecache.
This can easily be reproduced with fallocate as shown above. Read
faulted pages will eventually end up being marked dirty.  But there
is a window where they are clean and could be impacted by code such
as drop_caches.  So, just dirty them all as they are added to the
pagecache.

In addition, it makes little sense to even try to drop hugetlbfs
pagecache pages, so disable calls to these filesystems in drop_caches
code.

Fixes: 70c3547e36f5 ("hugetlbfs: add hugetlbfs_fallocate()")
Cc: stable@vger.kernel.org
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 fs/drop_caches.c | 7 +++++++
 mm/hugetlb.c     | 6 ++++++
 2 files changed, 13 insertions(+)

diff --git a/fs/drop_caches.c b/fs/drop_caches.c
index 82377017130f..b72c5bc502a8 100644
--- a/fs/drop_caches.c
+++ b/fs/drop_caches.c
@@ -9,6 +9,7 @@
 #include <linux/writeback.h>
 #include <linux/sysctl.h>
 #include <linux/gfp.h>
+#include <linux/magic.h>
 #include "internal.h"
 
 /* A global variable is a bit ugly, but it keeps the code simple */
@@ -18,6 +19,12 @@ static void drop_pagecache_sb(struct super_block *sb, void *unused)
 {
 	struct inode *inode, *toput_inode = NULL;
 
+	/*
+	 * It makes no sense to try and drop hugetlbfs page cache pages.
+	 */
+	if (sb->s_magic == HUGETLBFS_MAGIC)
+		return;
+
 	spin_lock(&sb->s_inode_list_lock);
 	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
 		spin_lock(&inode->i_lock);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5c390f5a5207..7b5c0ad9a6bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3690,6 +3690,12 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 		return err;
 	ClearPagePrivate(page);
 
+	/*
+	 * set page dirty so that it will not be removed from cache/file
+	 * by non-hugetlbfs specific code paths.
+	 */
+	set_page_dirty(page);
+
 	spin_lock(&inode->i_lock);
 	inode->i_blocks += blocks_per_huge_page(h);
 	spin_unlock(&inode->i_lock);
-- 
2.17.2



* Re: [PATCH] hugetlbfs: dirty pages as they are added to pagecache
  2018-10-18  4:10 [PATCH] hugetlbfs: dirty pages as they are added to pagecache Mike Kravetz
@ 2018-10-18 23:08 ` Andrew Morton
  2018-10-18 23:16   ` Mike Kravetz
  2018-10-23  7:43 ` Michal Hocko
  1 sibling, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2018-10-18 23:08 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, Michal Hocko, Hugh Dickins,
	Naoya Horiguchi, Aneesh Kumar K . V, Andrea Arcangeli,
	Kirill A . Shutemov, Davidlohr Bueso, Alexander Viro, stable

On Wed, 17 Oct 2018 21:10:22 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:

> Some test systems were experiencing negative huge page reserve
> counts and incorrect file block counts.  This was traced to
> /proc/sys/vm/drop_caches removing clean pages from hugetlbfs
> file pagecaches.  When non-hugetlbfs explicit code removes the
> pages, the appropriate accounting is not performed.
> 
> This can be recreated as follows:
>  fallocate -l 2M /dev/hugepages/foo
>  echo 1 > /proc/sys/vm/drop_caches
>  fallocate -l 2M /dev/hugepages/foo
>  grep -i huge /proc/meminfo
>    AnonHugePages:         0 kB
>    ShmemHugePages:        0 kB
>    HugePages_Total:    2048
>    HugePages_Free:     2047
>    HugePages_Rsvd:    18446744073709551615
>    HugePages_Surp:        0
>    Hugepagesize:       2048 kB
>    Hugetlb:         4194304 kB
>  ls -lsh /dev/hugepages/foo
>    4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo
> 
> To address this issue, dirty pages as they are added to pagecache.
> This can easily be reproduced with fallocate as shown above. Read
> faulted pages will eventually end up being marked dirty.  But there
> is a window where they are clean and could be impacted by code such
> as drop_caches.  So, just dirty them all as they are added to the
> pagecache.
> 
> In addition, it makes little sense to even try to drop hugetlbfs
> pagecache pages, so disable calls to these filesystems in drop_caches
> code.
> 
> ...
>
> --- a/fs/drop_caches.c
> +++ b/fs/drop_caches.c
> @@ -9,6 +9,7 @@
>  #include <linux/writeback.h>
>  #include <linux/sysctl.h>
>  #include <linux/gfp.h>
> +#include <linux/magic.h>
>  #include "internal.h"
>  
>  /* A global variable is a bit ugly, but it keeps the code simple */
> @@ -18,6 +19,12 @@ static void drop_pagecache_sb(struct super_block *sb, void *unused)
>  {
>  	struct inode *inode, *toput_inode = NULL;
>  
> +	/*
> +	 * It makes no sense to try and drop hugetlbfs page cache pages.
> +	 */
> +	if (sb->s_magic == HUGETLBFS_MAGIC)
> +		return;

Hardcoding hugetlbfs seems wrong here.  There are other filesystems
where it makes no sense to try to drop pagecache.  ramfs and, errrr...

I'm struggling to remember which is the correct thing to test here. 
BDI_CAP_NO_WRITEBACK should get us there, but doesn't seem quite
appropriate.
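
For reference, keying the skip off the backing device's writeback
capability would look something like the sketch below.  This is only an
illustration of the idea (assuming BDI_CAP_NO_WRITEBACK is the bit we
would test); as said, it doesn't feel like quite the right predicate:

	/*
	 * Sketch only: skip superblocks whose backing device never does
	 * writeback, instead of matching on a filesystem magic number.
	 */
	static void drop_pagecache_sb(struct super_block *sb, void *unused)
	{
		struct inode *inode, *toput_inode = NULL;

		if (sb->s_bdi->capabilities & BDI_CAP_NO_WRITEBACK)
			return;

		/* ... existing inode walk unchanged ... */
	}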




* Re: [PATCH] hugetlbfs: dirty pages as they are added to pagecache
  2018-10-18 23:08 ` Andrew Morton
@ 2018-10-18 23:16   ` Mike Kravetz
  2018-10-19  0:46     ` Andrea Arcangeli
  0 siblings, 1 reply; 10+ messages in thread
From: Mike Kravetz @ 2018-10-18 23:16 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Michal Hocko, Hugh Dickins,
	Naoya Horiguchi, Aneesh Kumar K . V, Andrea Arcangeli,
	Kirill A . Shutemov, Davidlohr Bueso, Alexander Viro, stable

On 10/18/18 4:08 PM, Andrew Morton wrote:
> On Wed, 17 Oct 2018 21:10:22 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:
> 
>> Some test systems were experiencing negative huge page reserve
>> counts and incorrect file block counts.  This was traced to
>> /proc/sys/vm/drop_caches removing clean pages from hugetlbfs
>> file pagecaches.  When non-hugetlbfs explicit code removes the
>> pages, the appropriate accounting is not performed.
>>
>> This can be recreated as follows:
>>  fallocate -l 2M /dev/hugepages/foo
>>  echo 1 > /proc/sys/vm/drop_caches
>>  fallocate -l 2M /dev/hugepages/foo
>>  grep -i huge /proc/meminfo
>>    AnonHugePages:         0 kB
>>    ShmemHugePages:        0 kB
>>    HugePages_Total:    2048
>>    HugePages_Free:     2047
>>    HugePages_Rsvd:    18446744073709551615
>>    HugePages_Surp:        0
>>    Hugepagesize:       2048 kB
>>    Hugetlb:         4194304 kB
>>  ls -lsh /dev/hugepages/foo
>>    4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo
>>
>> To address this issue, dirty pages as they are added to pagecache.
>> This can easily be reproduced with fallocate as shown above. Read
>> faulted pages will eventually end up being marked dirty.  But there
>> is a window where they are clean and could be impacted by code such
>> as drop_caches.  So, just dirty them all as they are added to the
>> pagecache.
>>
>> In addition, it makes little sense to even try to drop hugetlbfs
>> pagecache pages, so disable calls to these filesystems in drop_caches
>> code.
>>
>> ...
>>
>> --- a/fs/drop_caches.c
>> +++ b/fs/drop_caches.c
>> @@ -9,6 +9,7 @@
>>  #include <linux/writeback.h>
>>  #include <linux/sysctl.h>
>>  #include <linux/gfp.h>
>> +#include <linux/magic.h>
>>  #include "internal.h"
>>  
>>  /* A global variable is a bit ugly, but it keeps the code simple */
>> @@ -18,6 +19,12 @@ static void drop_pagecache_sb(struct super_block *sb, void *unused)
>>  {
>>  	struct inode *inode, *toput_inode = NULL;
>>  
>> +	/*
>> +	 * It makes no sense to try and drop hugetlbfs page cache pages.
>> +	 */
>> +	if (sb->s_magic == HUGETLBFS_MAGIC)
>> +		return;
> 
> Hardcoding hugetlbfs seems wrong here.  There are other filesystems
> where it makes no sense to try to drop pagecache.  ramfs and, errrr...
> 
> I'm struggling to remember which is the correct thing to test here. 
> BDI_CAP_NO_WRITEBACK should get us there, but doesn't seem quite
> appropriate.

I was not sure about this, and expected someone could come up with
something better.  It just seems there are filesystems like hugetlbfs,
where it makes no sense wasting cycles traversing the filesystem.  So,
let's not even try.

Hoping someone can come up with a better method than hard coding as
I have done above.
-- 
Mike Kravetz


* Re: [PATCH] hugetlbfs: dirty pages as they are added to pagecache
  2018-10-18 23:16   ` Mike Kravetz
@ 2018-10-19  0:46     ` Andrea Arcangeli
  2018-10-19  1:47       ` Andrew Morton
  0 siblings, 1 reply; 10+ messages in thread
From: Andrea Arcangeli @ 2018-10-19  0:46 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Andrew Morton, linux-mm, linux-kernel, Michal Hocko,
	Hugh Dickins, Naoya Horiguchi, Aneesh Kumar K . V,
	Kirill A . Shutemov, Davidlohr Bueso, Alexander Viro, stable

On Thu, Oct 18, 2018 at 04:16:40PM -0700, Mike Kravetz wrote:
> I was not sure about this, and expected someone could come up with
> something better.  It just seems there are filesystems like hugetlbfs,
> where it makes no sense wasting cycles traversing the filesystem.  So,
> let's not even try.
> 
> Hoping someone can come up with a better method than hard coding as
> I have done above.

It's not strictly required after marking the pages dirty though. The
real fix is the other one? Could we just drop the hardcoding and let
it run after the real fix is applied?

The performance of drop_caches doesn't seem critical, especially with
gigapages. tmpfs doesn't seem to be optimized away from drop_caches
and the gain would be bigger for tmpfs if THP is not enabled in the
mount, so I'm not sure if we should worry about hugetlbfs first.

Thanks,
Andrea


* Re: [PATCH] hugetlbfs: dirty pages as they are added to pagecache
  2018-10-19  0:46     ` Andrea Arcangeli
@ 2018-10-19  1:47       ` Andrew Morton
  2018-10-19  4:50         ` Mike Kravetz
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2018-10-19  1:47 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: Mike Kravetz, linux-mm, linux-kernel, Michal Hocko, Hugh Dickins,
	Naoya Horiguchi, Aneesh Kumar K . V, Kirill A . Shutemov,
	Davidlohr Bueso, Alexander Viro, stable

On Thu, 18 Oct 2018 20:46:21 -0400 Andrea Arcangeli <aarcange@redhat.com> wrote:

> On Thu, Oct 18, 2018 at 04:16:40PM -0700, Mike Kravetz wrote:
> > I was not sure about this, and expected someone could come up with
> > something better.  It just seems there are filesystems like hugetlbfs,
> > where it makes no sense wasting cycles traversing the filesystem.  So,
> > let's not even try.
> > 
> > Hoping someone can come up with a better method than hard coding as
> > I have done above.
> 
> It's not strictly required after marking the pages dirty though. The
> real fix is the other one? Could we just drop the hardcoding and let
> it run after the real fix is applied?
> 
> The performance of drop_caches doesn't seem critical, especially with
> gigapages. tmpfs doesn't seem to be optimized away from drop_caches
> and the gain would be bigger for tmpfs if THP is not enabled in the
> mount, so I'm not sure if we should worry about hugetlbfs first.

I guess so.  I can't immediately see a clean way of expressing this so
perhaps it would need a new BDI_CAP_NO_BACKING_STORE.  Such a
thing hardly seems worthwhile for drop_caches.

And drop_caches really shouldn't be there anyway.  It's a standing
workaround for ongoing suckage in pagecache and metadata reclaim
behaviour :(



* Re: [PATCH] hugetlbfs: dirty pages as they are added to pagecache
  2018-10-19  1:47       ` Andrew Morton
@ 2018-10-19  4:50         ` Mike Kravetz
  0 siblings, 0 replies; 10+ messages in thread
From: Mike Kravetz @ 2018-10-19  4:50 UTC (permalink / raw)
  To: Andrew Morton, Andrea Arcangeli
  Cc: linux-mm, linux-kernel, Michal Hocko, Hugh Dickins,
	Naoya Horiguchi, Aneesh Kumar K . V, Kirill A . Shutemov,
	Davidlohr Bueso, Alexander Viro, stable

On 10/18/18 6:47 PM, Andrew Morton wrote:
> On Thu, 18 Oct 2018 20:46:21 -0400 Andrea Arcangeli <aarcange@redhat.com> wrote:
> 
>> On Thu, Oct 18, 2018 at 04:16:40PM -0700, Mike Kravetz wrote:
>>> I was not sure about this, and expected someone could come up with
>>> something better.  It just seems there are filesystems like hugetlbfs,
>>> where it makes no sense wasting cycles traversing the filesystem.  So,
>>> let's not even try.
>>>
>>> Hoping someone can come up with a better method than hard coding as
>>> I have done above.
>>
>> It's not strictly required after marking the pages dirty though. The
>> real fix is the other one? Could we just drop the hardcoding and let
>> it run after the real fix is applied?

Yeah.  The other part of the patch is the real fix.  This drop_caches
part is not necessary.

>> The performance of drop_caches doesn't seem critical, especially with
>> gigapages. tmpfs doesn't seem to be optimized away from drop_caches
>> and the gain would be bigger for tmpfs if THP is not enabled in the
>> mount, so I'm not sure if we should worry about hugetlbfs first.
> 
> I guess so.  I can't immediately see a clean way of expressing this so
> perhaps it would need a new BDI_CAP_NO_BACKING_STORE.  Such a
> thing hardly seems worthwhile for drop_caches.
> 
> And drop_caches really shouldn't be there anyway.  It's a standing
> workaround for ongoing suckage in pagecache and metadata reclaim
> behaviour :(

I'm OK with dropping the other part.  It just seemed like there was no
real reason to try and drop_caches for hugetlbfs (and perhaps others).

Andrew, would you like another version?  Or can you just drop the
fs/drop_caches.c part?

-- 
Mike Kravetz


* Re: [PATCH] hugetlbfs: dirty pages as they are added to pagecache
  2018-10-18  4:10 [PATCH] hugetlbfs: dirty pages as they are added to pagecache Mike Kravetz
  2018-10-18 23:08 ` Andrew Morton
@ 2018-10-23  7:43 ` Michal Hocko
  2018-10-23 17:30   ` Mike Kravetz
  1 sibling, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2018-10-23  7:43 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, Andrew Morton, Hugh Dickins,
	Naoya Horiguchi, Aneesh Kumar K . V, Andrea Arcangeli,
	Kirill A . Shutemov, Davidlohr Bueso, Alexander Viro, stable

On Wed 17-10-18 21:10:22, Mike Kravetz wrote:
> Some test systems were experiencing negative huge page reserve
> counts and incorrect file block counts.  This was traced to
> /proc/sys/vm/drop_caches removing clean pages from hugetlbfs
> file pagecaches.  When non-hugetlbfs explicit code removes the
> pages, the appropriate accounting is not performed.
> 
> This can be recreated as follows:
>  fallocate -l 2M /dev/hugepages/foo
>  echo 1 > /proc/sys/vm/drop_caches
>  fallocate -l 2M /dev/hugepages/foo
>  grep -i huge /proc/meminfo
>    AnonHugePages:         0 kB
>    ShmemHugePages:        0 kB
>    HugePages_Total:    2048
>    HugePages_Free:     2047
>    HugePages_Rsvd:    18446744073709551615
>    HugePages_Surp:        0
>    Hugepagesize:       2048 kB
>    Hugetlb:         4194304 kB
>  ls -lsh /dev/hugepages/foo
>    4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo
> 
> To address this issue, dirty pages as they are added to pagecache.
> This can easily be reproduced with fallocate as shown above. Read
> faulted pages will eventually end up being marked dirty.  But there
> is a window where they are clean and could be impacted by code such
> as drop_caches.  So, just dirty them all as they are added to the
> pagecache.
> 
> In addition, it makes little sense to even try to drop hugetlbfs
> pagecache pages, so disable calls to these filesystems in drop_caches
> code.
> 
> Fixes: 70c3547e36f5 ("hugetlbfs: add hugetlbfs_fallocate()")
> Cc: stable@vger.kernel.org
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>

I do agree with others that the HUGETLBFS_MAGIC check in drop_pagecache_sb
is wrong in principle. I am not even sure we want to special-case memory
backed filesystems. What if we ever implement MADV_FREE on fs? Should
those pages be dropped? My first take would be yes.

Acked-by: Michal Hocko <mhocko@suse.com> to the set_page_dirty part.

Although I am wondering why the Fixes tag covers only the fallocate
path. In other words, do we need the same treatment for the page fault
path? We do not set the dirty bit on the page there either. We rely on
the dirty bit in the pte, and only for writable mappings. I have a hard
time seeing why we have been safe there as well. So maybe it is your
Fixes: tag which is not entirely correct, or I am simply missing the
fault path.
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] hugetlbfs: dirty pages as they are added to pagecache
  2018-10-23  7:43 ` Michal Hocko
@ 2018-10-23 17:30   ` Mike Kravetz
  2018-10-23 17:41     ` Michal Hocko
  2018-10-24  5:00     ` Khalid Aziz
  0 siblings, 2 replies; 10+ messages in thread
From: Mike Kravetz @ 2018-10-23 17:30 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, linux-kernel, Andrew Morton, Hugh Dickins,
	Naoya Horiguchi, Aneesh Kumar K . V, Andrea Arcangeli,
	Kirill A . Shutemov, Davidlohr Bueso, Alexander Viro, stable

On 10/23/18 12:43 AM, Michal Hocko wrote:
> On Wed 17-10-18 21:10:22, Mike Kravetz wrote:
>> Some test systems were experiencing negative huge page reserve
>> counts and incorrect file block counts.  This was traced to
>> /proc/sys/vm/drop_caches removing clean pages from hugetlbfs
>> file pagecaches.  When non-hugetlbfs explicit code removes the
>> pages, the appropriate accounting is not performed.
>>
>> This can be recreated as follows:
>>  fallocate -l 2M /dev/hugepages/foo
>>  echo 1 > /proc/sys/vm/drop_caches
>>  fallocate -l 2M /dev/hugepages/foo
>>  grep -i huge /proc/meminfo
>>    AnonHugePages:         0 kB
>>    ShmemHugePages:        0 kB
>>    HugePages_Total:    2048
>>    HugePages_Free:     2047
>>    HugePages_Rsvd:    18446744073709551615
>>    HugePages_Surp:        0
>>    Hugepagesize:       2048 kB
>>    Hugetlb:         4194304 kB
>>  ls -lsh /dev/hugepages/foo
>>    4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo
>>
>> To address this issue, dirty pages as they are added to pagecache.
>> This can easily be reproduced with fallocate as shown above. Read
>> faulted pages will eventually end up being marked dirty.  But there
>> is a window where they are clean and could be impacted by code such
>> as drop_caches.  So, just dirty them all as they are added to the
>> pagecache.
>>
>> In addition, it makes little sense to even try to drop hugetlbfs
>> pagecache pages, so disable calls to these filesystems in drop_caches
>> code.
>>
>> Fixes: 70c3547e36f5 ("hugetlbfs: add hugetlbfs_fallocate()")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> 
> I do agree with others that the HUGETLBFS_MAGIC check in drop_pagecache_sb
> is wrong in principle. I am not even sure we want to special-case memory
> backed filesystems. What if we ever implement MADV_FREE on fs? Should
> those pages be dropped? My first take would be yes.

Ok, I have removed that hard coded check.  Implementing MADV_FREE on
hugetlbfs would take some work, but it could be done.

> Acked-by: Michal Hocko <mhocko@suse.com> to the set_page_dirty part.
> 
> Although I am wondering why the Fixes tag covers only the fallocate
> path. In other words, do we need the same treatment for the page fault
> path? We do not set the dirty bit on the page there either. We rely on
> the dirty bit in the pte, and only for writable mappings. I have a hard
> time seeing why we have been safe there as well. So maybe it is your
> Fixes: tag which is not entirely correct, or I am simply missing the
> fault path.

No, you are not missing anything.  In the commit log I mentioned that this
also does apply to the fault path.  The change takes care of them both.
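
Both paths funnel through huge_add_to_page_cache(), which is why the
single set_page_dirty() there covers them.  Roughly, paraphrasing the
shared-mapping case in hugetlb_no_page() with the retry/backout
handling elided:

	/*
	 * Fault path sketch: a newly allocated huge page for a shared
	 * mapping is inserted via the same helper the fallocate path
	 * uses, so dirtying the page inside huge_add_to_page_cache()
	 * covers faults as well.
	 */
	if (vma->vm_flags & VM_MAYSHARE) {
		int err = huge_add_to_page_cache(page, mapping, idx);

		if (err)
			goto backout;	/* error/retry handling elided */
	}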

I was struggling with what to put in the fixes tag.  As mentioned, this
problem also exists in the fault path.  Since 3.16 is the oldest stable
release, I went back and used the commit next to the add_to_page_cache code
there.  However, that seems kind of random.  Is there a better way to say
the patch applies to all stable releases?

Here is updated patch without the drop_caches change and updated fixes tag.

From: Mike Kravetz <mike.kravetz@oracle.com>

hugetlbfs: dirty pages as they are added to pagecache

Some test systems were experiencing negative huge page reserve
counts and incorrect file block counts.  This was traced to
/proc/sys/vm/drop_caches removing clean pages from hugetlbfs
file pagecaches.  When non-hugetlbfs explicit code removes the
pages, the appropriate accounting is not performed.

This can be recreated as follows:
 fallocate -l 2M /dev/hugepages/foo
 echo 1 > /proc/sys/vm/drop_caches
 fallocate -l 2M /dev/hugepages/foo
 grep -i huge /proc/meminfo
   AnonHugePages:         0 kB
   ShmemHugePages:        0 kB
   HugePages_Total:    2048
   HugePages_Free:     2047
   HugePages_Rsvd:    18446744073709551615
   HugePages_Surp:        0
   Hugepagesize:       2048 kB
   Hugetlb:         4194304 kB
 ls -lsh /dev/hugepages/foo
   4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo

To address this issue, dirty pages as they are added to pagecache.
This can easily be reproduced with fallocate as shown above. Read
faulted pages will eventually end up being marked dirty.  But there
is a window where they are clean and could be impacted by code such
as drop_caches.  So, just dirty them all as they are added to the
pagecache.

Fixes: 6bda666a03f0 ("hugepages: fold find_or_alloc_pages into huge_no_page()")
Cc: stable@vger.kernel.org
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5c390f5a5207..7b5c0ad9a6bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3690,6 +3690,12 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 		return err;
 	ClearPagePrivate(page);
 
+	/*
+	 * set page dirty so that it will not be removed from cache/file
+	 * by non-hugetlbfs specific code paths.
+	 */
+	set_page_dirty(page);
+
 	spin_lock(&inode->i_lock);
 	inode->i_blocks += blocks_per_huge_page(h);
 	spin_unlock(&inode->i_lock);
-- 
2.17.2



* Re: [PATCH] hugetlbfs: dirty pages as they are added to pagecache
  2018-10-23 17:30   ` Mike Kravetz
@ 2018-10-23 17:41     ` Michal Hocko
  2018-10-24  5:00     ` Khalid Aziz
  1 sibling, 0 replies; 10+ messages in thread
From: Michal Hocko @ 2018-10-23 17:41 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, Andrew Morton, Hugh Dickins,
	Naoya Horiguchi, Aneesh Kumar K . V, Andrea Arcangeli,
	Kirill A . Shutemov, Davidlohr Bueso, Alexander Viro, stable

On Tue 23-10-18 10:30:44, Mike Kravetz wrote:
> On 10/23/18 12:43 AM, Michal Hocko wrote:
> > On Wed 17-10-18 21:10:22, Mike Kravetz wrote:
> >> Some test systems were experiencing negative huge page reserve
> >> counts and incorrect file block counts.  This was traced to
> >> /proc/sys/vm/drop_caches removing clean pages from hugetlbfs
> >> file pagecaches.  When non-hugetlbfs explicit code removes the
> >> pages, the appropriate accounting is not performed.
> >>
> >> This can be recreated as follows:
> >>  fallocate -l 2M /dev/hugepages/foo
> >>  echo 1 > /proc/sys/vm/drop_caches
> >>  fallocate -l 2M /dev/hugepages/foo
> >>  grep -i huge /proc/meminfo
> >>    AnonHugePages:         0 kB
> >>    ShmemHugePages:        0 kB
> >>    HugePages_Total:    2048
> >>    HugePages_Free:     2047
> >>    HugePages_Rsvd:    18446744073709551615
> >>    HugePages_Surp:        0
> >>    Hugepagesize:       2048 kB
> >>    Hugetlb:         4194304 kB
> >>  ls -lsh /dev/hugepages/foo
> >>    4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo
> >>
> >> To address this issue, dirty pages as they are added to pagecache.
> >> This can easily be reproduced with fallocate as shown above. Read
> >> faulted pages will eventually end up being marked dirty.  But there
> >> is a window where they are clean and could be impacted by code such
> >> as drop_caches.  So, just dirty them all as they are added to the
> >> pagecache.
> >>
> >> In addition, it makes little sense to even try to drop hugetlbfs
> >> pagecache pages, so disable calls to these filesystems in drop_caches
> >> code.
> >>
> >> Fixes: 70c3547e36f5 ("hugetlbfs: add hugetlbfs_fallocate()")
> >> Cc: stable@vger.kernel.org
> >> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> > 
> > I do agree with others that the HUGETLBFS_MAGIC check in drop_pagecache_sb
> > is wrong in principle. I am not even sure we want to special-case memory
> > backed filesystems. What if we ever implement MADV_FREE on fs? Should
> > those pages be dropped? My first take would be yes.
> 
> Ok, I have removed that hard coded check.  Implementing MADV_FREE on
> hugetlbfs would take some work, but it could be done.
> 
> > Acked-by: Michal Hocko <mhocko@suse.com> to the set_page_dirty part.
> > 
> > Although I am wondering why the Fixes tag covers only the fallocate
> > path. In other words, do we need the same treatment for the page fault
> > path? We do not set the dirty bit on the page there either. We rely on
> > the dirty bit in the pte, and only for writable mappings. I have a hard
> > time seeing why we have been safe there as well. So maybe it is your
> > Fixes: tag which is not entirely correct, or I am simply missing the
> > fault path.
> 
> No, you are not missing anything.  In the commit log I mentioned that this
> also does apply to the fault path.  The change takes care of them both.
> 
> I was struggling with what to put in the fixes tag.  As mentioned, this
> problem also exists in the fault path.  Since 3.16 is the oldest stable
> release, I went back and used the commit next to the add_to_page_cache code
> there.  However, that seems kind of random.  Is there a better way to say
> the patch applies to all stable releases?

OK, good, I was afraid I was missing something, well except for not
reading the changelog properly. I would go with

Cc: stable # all kernels with hugetlb

> Here is updated patch without the drop_caches change and updated fixes tag.
> 
> From: Mike Kravetz <mike.kravetz@oracle.com>
> 
> hugetlbfs: dirty pages as they are added to pagecache
> 
> Some test systems were experiencing negative huge page reserve
> counts and incorrect file block counts.  This was traced to
> /proc/sys/vm/drop_caches removing clean pages from hugetlbfs
> file pagecaches.  When non-hugetlbfs explicit code removes the
> pages, the appropriate accounting is not performed.
> 
> This can be recreated as follows:
>  fallocate -l 2M /dev/hugepages/foo
>  echo 1 > /proc/sys/vm/drop_caches
>  fallocate -l 2M /dev/hugepages/foo
>  grep -i huge /proc/meminfo
>    AnonHugePages:         0 kB
>    ShmemHugePages:        0 kB
>    HugePages_Total:    2048
>    HugePages_Free:     2047
>    HugePages_Rsvd:    18446744073709551615
>    HugePages_Surp:        0
>    Hugepagesize:       2048 kB
>    Hugetlb:         4194304 kB
>  ls -lsh /dev/hugepages/foo
>    4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo
> 
> To address this issue, dirty pages as they are added to pagecache.
> This can easily be reproduced with fallocate as shown above. Read
> faulted pages will eventually end up being marked dirty.  But there
> is a window where they are clean and could be impacted by code such
> as drop_caches.  So, just dirty them all as they are added to the
> pagecache.
> 
> Fixes: 6bda666a03f0 ("hugepages: fold find_or_alloc_pages into huge_no_page()")
> Cc: stable@vger.kernel.org
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/hugetlb.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 5c390f5a5207..7b5c0ad9a6bd 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3690,6 +3690,12 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
>  		return err;
>  	ClearPagePrivate(page);
>  
> +	/*
> +	 * set page dirty so that it will not be removed from cache/file
> +	 * by non-hugetlbfs specific code paths.
> +	 */
> +	set_page_dirty(page);
> +
>  	spin_lock(&inode->i_lock);
>  	inode->i_blocks += blocks_per_huge_page(h);
>  	spin_unlock(&inode->i_lock);
> -- 
> 2.17.2

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] hugetlbfs: dirty pages as they are added to pagecache
  2018-10-23 17:30   ` Mike Kravetz
  2018-10-23 17:41     ` Michal Hocko
@ 2018-10-24  5:00     ` Khalid Aziz
  1 sibling, 0 replies; 10+ messages in thread
From: Khalid Aziz @ 2018-10-24  5:00 UTC (permalink / raw)
  To: Mike Kravetz, Michal Hocko
  Cc: linux-mm, linux-kernel, Andrew Morton, Hugh Dickins,
	Naoya Horiguchi, Aneesh Kumar K . V, Andrea Arcangeli,
	Kirill A . Shutemov, Davidlohr Bueso, Alexander Viro, stable

On Tue, 2018-10-23 at 10:30 -0700, Mike Kravetz wrote:
> ..... snip....
> Here is updated patch without the drop_caches change and updated
> fixes tag.
> 
> From: Mike Kravetz <mike.kravetz@oracle.com>
> 
> hugetlbfs: dirty pages as they are added to pagecache
> 
> Some test systems were experiencing negative huge page reserve
> counts and incorrect file block counts.  This was traced to
> /proc/sys/vm/drop_caches removing clean pages from hugetlbfs
> file pagecaches.  When non-hugetlbfs explicit code removes the
> pages, the appropriate accounting is not performed.
> 
> This can be recreated as follows:
>  fallocate -l 2M /dev/hugepages/foo
>  echo 1 > /proc/sys/vm/drop_caches
>  fallocate -l 2M /dev/hugepages/foo
>  grep -i huge /proc/meminfo
>    AnonHugePages:         0 kB
>    ShmemHugePages:        0 kB
>    HugePages_Total:    2048
>    HugePages_Free:     2047
>    HugePages_Rsvd:    18446744073709551615
>    HugePages_Surp:        0
>    Hugepagesize:       2048 kB
>    Hugetlb:         4194304 kB
>  ls -lsh /dev/hugepages/foo
>    4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo
> 
> To address this issue, dirty pages as they are added to pagecache.
> This can easily be reproduced with fallocate as shown above. Read
> faulted pages will eventually end up being marked dirty.  But there
> is a window where they are clean and could be impacted by code such
> as drop_caches.  So, just dirty them all as they are added to the
> pagecache.
> 
> Fixes: 6bda666a03f0 ("hugepages: fold find_or_alloc_pages into
> huge_no_page()")
> Cc: stable@vger.kernel.org
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  mm/hugetlb.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 5c390f5a5207..7b5c0ad9a6bd 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3690,6 +3690,12 @@ int huge_add_to_page_cache(struct page *page,
> struct address_space *mapping,
>  		return err;
>  	ClearPagePrivate(page);
>  
> +	/*
> +	 * set page dirty so that it will not be removed from
> cache/file
> +	 * by non-hugetlbfs specific code paths.
> +	 */
> +	set_page_dirty(page);
> +
>  	spin_lock(&inode->i_lock);
>  	inode->i_blocks += blocks_per_huge_page(h);
>  	spin_unlock(&inode->i_lock);

This looks good.

Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>

--
Khalid

