All of lore.kernel.org
* mmap with huge page
       [not found] <115606142.5883850.1531854314452.ref@mail.yahoo.com>
@ 2018-07-17 19:05   ` David Frank
  0 siblings, 0 replies; 9+ messages in thread
From: David Frank @ 2018-07-17 19:05 UTC (permalink / raw)
  To: Kernelnewbies, Linux-mm, LKML

Hi,

According to the instructions, I have to mount a hugetlbfs filesystem on a directory and create files in that directory to use the mmap huge page feature. But the issue is that the files in that directory consume the huge pages configured through vm.nr_hugepages even when the files are not in use.

When the total size of the files in the directory equals vm.nr_hugepages * huge page size, mmap fails with 'Cannot allocate memory' if the file to be mapped is in that directory or the call uses the MAP_HUGETLB flag.

Basically, I have to move the files out of the directory to free up huge pages.

Am I missing anything here?

Thanks,
David


* Re: mmap with huge page
  2018-07-17 19:05   ` David Frank
@ 2018-07-17 23:56     ` Mike Kravetz
  -1 siblings, 0 replies; 9+ messages in thread
From: Mike Kravetz @ 2018-07-17 23:56 UTC (permalink / raw)
  To: David Frank, Kernelnewbies, Linux-mm, LKML

On 07/17/2018 12:05 PM, David Frank wrote:
> Hi,
> According to the instructions, I have to mount a hugetlbfs filesystem on a directory and create files in that directory to use the mmap huge page feature. But the issue is that the files in that directory consume the huge pages configured through vm.nr_hugepages even when the files are not in use.
> 
> When the total size of the files in the directory equals vm.nr_hugepages * huge page size, mmap fails with 'Cannot allocate memory' if the file to be mapped is in that directory or the call uses the MAP_HUGETLB flag.
> 
> Basically, I have to move the files out of the directory to free up huge pages.
> 
> Am I missing anything here?
> 
> 

No, that is working as designed.

hugetlbfs filesystems are generally pre-allocated with nr_hugepages
huge pages.  That is the upper limit of huge pages available.  You can
use overcommit/surplus pages to try and exceed the limit, but that
comes with a whole set of potential issues.

If you have not done so already, please see Documentation/vm/hugetlbpage.txt
in the kernel source tree.
-- 
Mike Kravetz

* Re: mmap with huge page
  2018-07-17 23:56     ` Mike Kravetz
@ 2018-07-18  0:01       ` David Frank
  -1 siblings, 0 replies; 9+ messages in thread
From: David Frank @ 2018-07-18  0:01 UTC (permalink / raw)
  To: Kernelnewbies, Linux-mm, LKML, Mike Kravetz

Thanks Mike. I read the doc, but it is not explicit about unused files taking up the huge page count.

On Tuesday, July 17, 2018, 4:57:04 PM PDT, Mike Kravetz <mike.kravetz@oracle.com> wrote:

On 07/17/2018 12:05 PM, David Frank wrote:
> Hi,
> According to the instructions, I have to mount a hugetlbfs filesystem on a directory and create files in that directory to use the mmap huge page feature. But the issue is that the files in that directory consume the huge pages configured through vm.nr_hugepages even when the files are not in use.
> 
> When the total size of the files in the directory equals vm.nr_hugepages * huge page size, mmap fails with 'Cannot allocate memory' if the file to be mapped is in that directory or the call uses the MAP_HUGETLB flag.
> 
> Basically, I have to move the files out of the directory to free up huge pages.
> 
> Am I missing anything here?
> 
> 

No, that is working as designed.

hugetlbfs filesystems are generally pre-allocated with nr_hugepages
huge pages.  That is the upper limit of huge pages available.  You can
use overcommit/surplus pages to try and exceed the limit, but that
comes with a whole set of potential issues.

If you have not done so already, please see Documentation/vm/hugetlbpage.txt
in the kernel source tree.
-- 
Mike Kravetz

* Re: mmap with huge page
  2018-07-18  0:01       ` David Frank
@ 2018-07-18 10:37         ` Michal Hocko
  -1 siblings, 0 replies; 9+ messages in thread
From: Michal Hocko @ 2018-07-18 10:37 UTC (permalink / raw)
  To: David Frank; +Cc: Kernelnewbies, Linux-mm, LKML, Mike Kravetz

On Wed 18-07-18 00:01:10, David Frank wrote:
> Thanks Mike. I read the doc, but it is not explicit about unused files taking up the huge page count.

What do you consider a non-used file? The file contains data somebody
might want to read later. You cannot simply remove it. This is no
different from any other in-memory filesystem (e.g. tmpfs, ramfs).
-- 
Michal Hocko
SUSE Labs

end of thread, other threads:[~2018-07-18 10:37 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <115606142.5883850.1531854314452.ref@mail.yahoo.com>
2018-07-17 19:05 ` mmap with huge page David Frank
2018-07-17 23:56   ` Mike Kravetz
2018-07-18  0:01     ` David Frank
2018-07-18 10:37       ` Michal Hocko
