* Additional Huge Pages
@ 2022-07-22 17:20 Albert, Des
  2022-07-22 18:12 ` Mike Kravetz
  2022-07-25 15:28 ` Rongwei Wang
  0 siblings, 2 replies; 8+ messages in thread
From: Albert, Des @ 2022-07-22 17:20 UTC (permalink / raw)
  To: mike.kravetz, songmuchun, linux-mm


[-- Attachment #1.1: Type: text/plain, Size: 956 bytes --]

Hi

I am the Product Manager for the HPE Cray Operating System (formerly the Cray Linux Environment).

One of the features of this product is a component known as additional huge pages. This is kernel code that enables the selection of 'non-standard' huge page sizes.
For example, the current implementation allows for selection of huge page sizes of 2, 4, 8, 16, 32, 64, 128, 256 and 512 MB as well as 1 and 2 GB.

We are currently evaluating the concept of providing this code to kernel.org. I realize that this would require dedicating technical resources to work with the maintainers.

I would like to know if there is interest in this suggestion. I realize that Transparent Huge Pages may be regarded as a more general approach to this requirement.

Thanks
Des

Dr Des Albert
HPC Software Product Manager
+1 360 312 6169
des.albert@hpe.com

Hewlett Packard Enterprise



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Additional Huge Pages
  2022-07-22 17:20 Additional Huge Pages Albert, Des
@ 2022-07-22 18:12 ` Mike Kravetz
  2022-07-22 19:20   ` Albert, Des
  2022-07-25 15:28 ` Rongwei Wang
  1 sibling, 1 reply; 8+ messages in thread
From: Mike Kravetz @ 2022-07-22 18:12 UTC (permalink / raw)
  To: Albert, Des; +Cc: songmuchun, linux-mm

On 07/22/22 17:20, Albert, Des wrote:
> Hi
> 
> I am the Product Manager for the HPE Cray Operating System ( formerly Cray Linux Environment )
> 
> One of the features of this product is a component known as additional huge pages. This is kernel code that enables the selection of 'non-standard' huge page sizes.
> For example, the current implementation allows for selection of huge page sizes of 2, 4, 8, 16, 32, 64, 128, 256 and 512 MB as well as 1 and 2 GB.
> 

Interesting.
Are these non-standard huge page sizes targeted at application usage,
or internal kernel APIs?  If applications, what API is used?  Is it
similar/the same as hugetlb?

Within the kernel, support for 'arbitrary page sizes' is provided by the
folio abstraction.  hugetlb code will be moving to that in the future.
Any new code such as this would be based on folios.

> We are currently evaluating the concept of providing this code to kernel.org. I realize that this would require dedication of technical resources to work with maintainers.
> 
> I would like to know if there is interest in this suggestion. I realize that Transparent Huge Pages may be regarded as a more general approach to this requirement.
> 

I guess interest would depend on the use cases and potential advantages
of this feature.  You should be able to speak to this based on your
current usage.
-- 
Mike Kravetz


^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: Additional Huge Pages
  2022-07-22 18:12 ` Mike Kravetz
@ 2022-07-22 19:20   ` Albert, Des
  2022-07-22 19:39     ` Matthew Wilcox
  0 siblings, 1 reply; 8+ messages in thread
From: Albert, Des @ 2022-07-22 19:20 UTC (permalink / raw)
  To: Mike Kravetz; +Cc: songmuchun, linux-mm

[-- Attachment #1: Type: text/plain, Size: 3233 bytes --]

Hi Mike

This code is targeted at application usage. Developers load an appropriate environment module and then compile and link (see the attached man page).

As can be seen in this document, the features were initially developed to support proprietary Cray High Speed Networks (Gemini and Aries) and the associated PGAS or SHMEM programming models. In this respect, it can be regarded as legacy code that has not kept pace with recent developments.

It has been challenging to determine the relevance of this feature to current hardware and applications. My requests to developers and benchmark specialists for information about the benefits it provides have not revealed much specific data. There is a general impression that it would be useful for GPU applications and MPI in large HPC systems but I suspect that it would require some very advanced knowledge of memory management for a developer to know precisely how and when to apply it.

This is the first time I have heard of the folio abstraction as the future for memory management. When you mention that future hugetlb work will be based on that concept, it seems unlikely that there would be interest in code that is not consistent with those developments. I also doubt that there would be a justification to 'update' the code to be consistent with future kernel developments.

I am therefore forming the impression that this idea may not be of interest to the Linux kernel community; however, I do not have the detailed technical depth of the development team.

Do you have some more information about this folio abstraction plan?

Des

-----Original Message-----
From: Mike Kravetz <mike.kravetz@oracle.com> 
Sent: Friday, July 22, 2022 11:12 AM
To: Albert, Des <des.albert@hpe.com>
Cc: songmuchun@bytedance.com; linux-mm@kvack.org
Subject: Re: Additional Huge Pages

On 07/22/22 17:20, Albert, Des wrote:
> Hi
> 
> I am the Product Manager for the HPE Cray Operating System ( formerly 
> Cray Linux Environment )
> 
> One of the features of this product is a component known as additional huge pages. This is kernel code that enables the selection of 'non-standard' huge page sizes.
> For example, the current implementation allows for selection of huge page sizes of 2, 4, 8, 16, 32, 64, 128, 256 and 512 MB as well as 1 and 2 GB.
> 

Interesting.
Are these non-standard huge page sizes targeted at application usage, or internal kernel APIs?  If applications, what API is used?  Is it similar/the same as hugetlb?

Within the kernel, support for 'arbitrary page sizes' is provided by the folio abstraction.  hugetlb code will be moving to that in the future.
Any new code such as this would be based on folios.

> We are currently evaluating the concept of providing this code to kernel.org. I realize that this would require dedication of technical resources to work with maintainers.
> 
> I would like to know if there is interest in this suggestion. I realize that Transparent Huge Pages may be regarded as a more general approach to this requirement.
> 

I guess interest would depend on the use cases and potential advantages of this feature.  You should be able to speak to this based on your current usage.
--
Mike Kravetz

[-- Attachment #2: man.txt --]
[-- Type: text/plain, Size: 21011 bytes --]

intro_hugepages(1)				     General Commands Manual				      intro_hugepages(1)

NAME
       intro_hugepages - Introduction to using huge pages

IMPLEMENTATION
       Cray Linux Environment (CLE)

DESCRIPTION
       Huge pages are virtual memory pages which are bigger than the default base page size of 4Kbytes. Huge pages can improve
       memory performance for common access patterns on large data sets. Huge pages also increase the maximum size of data and
       text in a program accessible by the high speed network. Access to huge pages is provided through a virtual file system
       called hugetlbfs. Every file on this file system is backed by huge pages and is directly accessed with mmap() or read().
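
       For example, on a generic Linux system an administrator can reserve huge pages of a given size and mount a hugetlbfs
       instance directly (a minimal sketch using the standard sysfs and mount interfaces; mount points and available page
       sizes are system specific):

	 # reserve 64 huge pages of the 2 MB size (as root)
	 echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

	 # mount a hugetlbfs instance whose files are backed by 2 MB pages
	 mkdir -p /mnt/huge
	 mount -t hugetlbfs -o pagesize=2M none /mnt/huge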

       The libhugetlbfs library allows an application to use huge pages more easily than it could by directly accessing the
       hugetlbfs filesystem. A user may use libhugetlbfs to back application text and data segments.

       For definitions of terms used in this man page, see Terms.

   Module Support
       Module files set the necessary link options and run time environment variables to facilitate the usage of the huge page
       size indicated by the module name.

       Gemini systems: craype-hugepages128K, craype-hugepages512K, craype-hugepages2M, craype-hugepages8M, craype-hugepages16M,
       craype-hugepages64M.

       Aries systems: craype-hugepages2M, craype-hugepages4M, craype-hugepages8M, craype-hugepages16M, craype-hugepages32M,
       craype-hugepages64M, craype-hugepages128M, craype-hugepages256M , craype-hugepages512M, craype-hugepages1G, and craype-
       hugepages2G.

       To compile a Unified Parallel C application that uses 2 M huge pages:

	 module load PrgEnv-cray
	 module load craype-hugepages2M
	 cc -h upc -c array_upc.c
	 cc -h upc -o array_upc.x array_upc.o

       To see the link options and run time environment variables set by these modules:

	 module show module_name

       Note that the value of HUGETLB_DEFAULT_PAGE_SIZE varies between craype-hugepages modules. Also note that the name of the
       HUGETLB<size>_POST_LINK_OPTS variable varies between modules, but its value is the same.

	 setenv HUGETLB_DEFAULT_PAGE_SIZE <size>
	 setenv HUGETLB_MORECORE yes
	 setenv HUGETLB_ELFMAP W
	 setenv HUGETLB_FORCE_ELFMAP yes+
	 setenv HUGETLB<size>_POST_LINK_OPTS "-Wl,\
	 --whole-archive,-lhugetlbfs,--no-whole-archive -Wl,-Ttext-segment=address,-zmax-page-size=size"

       The HUGETLB<size>_POST_LINK_OPTS value is relevant to the creation of the executable, while the others are run time
       environment variables. A user may choose to run an application with a different craype-hugepages module than was used at
       compile and link time. To make most efficient use of available memory, use the smallest huge page size necessary for the
       application.
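
       For example, an executable can be linked with the 2M module and later run with a 16M default huge page size (a sketch
       assuming the module names listed above; the application name is illustrative):

	 module load craype-hugepages2M
	 cc -o app.x app.c                        # link time
	 module swap craype-hugepages2M craype-hugepages16M
	 aprun -n 64 ./app.x                      # run time uses a 16M default huge page size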

       The link options -Wl,-Ttext-segment=address,-zmax-page-size=size enforce the alignment and starting addresses of segments
       so that there are separate read-execute (text) and read-write (data and bss) segments for all page sizes up to the
       maximum of 64M for Gemini and 512M for Aries. This causes libhugetlbfs to avoid overlapping read-execute text with read-
       write data/bss on huge pages, which would cause a segment to be both writable and executable.

	      Note:  The current versions of all the hugepages modules use a 512M alignment and max-page-size so that a
	      statically linked executable may run using a variety of HUGETLB_DEFAULT_PAGE_SIZEs without having to relink;
	      however, this may not be appropriate for certain situations. Specifically, suppose the statically linked
	      application allocates a large amount of static data (greater than 2GiB) in the form of initialized arrays and the
	      32M hugepage module sets -Ttext-segment=0x20000000,-zmax-page-size=0x20000000 (512M alignment). The combined
	      static memory requirement (text+data), plus the memory padding that is added by the linker for 512M alignment, may
	      cause relocation addresses to exceed 4GiB. If this occurs, the user will see "relocation truncated to fit" errors.
	      To remedy this, select the smallest craype-hugepages module needed by the job, and then reset the alignment by
	      resetting the HUGETLB<size>_POST_LINK_OPTS environment variable before linking. For example, if an 8M page size is
	      sufficiently large for the application, load the craype-hugepages8M module and then set the text-segment and max-
	      page-size to 8MB before compiling and linking:

		module load craype-hugepages8M
		setenv HUGETLB8M_POST_LINK_OPTS "-Wl,--whole-archive,-lhugetlbfs,--no-whole-archive \
		-Wl,-Ttext-segment=0x800000,-zmax-page-size=0x800000"

		--------------------------------------------------------------
		Page Size  text-segment/max-page-size settings
		--------------------------------------------------------------
		2M	   -Ttext-segment=0x200000,-zmax-page-size=0x200000
		4M	   -Ttext-segment=0x400000,-zmax-page-size=0x400000
		8M	   -Ttext-segment=0x800000,-zmax-page-size=0x800000
		16M	   -Ttext-segment=0x1000000,-zmax-page-size=0x1000000
		--------------------------------------------------------------

	      Note:  The run time environment variables set by these modules are relevant on compute nodes, not on service
	      nodes. If the user is running the application on a service node instead of a compute node, they should unload the
	      hugepage module before execution.
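
	      For example (assuming the 2M module is the one currently loaded):

		module unload craype-hugepages2M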

   When to Use Huge Pages
       ·  For SHMEM applications, map the static data and/or private heap onto huge pages.

       ·  For applications written in Unified Parallel C, Coarray Fortran, and other languages based on the PGAS programming
	  model, map the static data and/or private heap onto huge pages.

       ·  For MPI applications, map the static data and/or heap onto huge pages.

       ·  For an application which uses shared memory, which needs to be concurrently registered with the high speed network
	  drivers for remote communication.

       ·  For an application doing heavy I/O.

       ·  To improve memory performance for common access patterns on large data sets.

   When to Avoid Using Huge Pages
       Applications sometimes consist of many steering programs in addition to the core application. Applying huge page behavior
       to all processes would not provide any benefit and would consume huge pages that would otherwise benefit the core
       application. See HUGETLB_RESTRICT_EXE described in ENVIRONMENT VARIABLES.

ENVIRONMENT VARIABLES
       The following variables affect huge pages:

       XT_SYMMETRIC_HEAP_SIZE
		 The symmetric heap always uses huge pages, regardless of whether or not a hugepage module is loaded.

		 For PGAS applications using UPC or Coarray Fortran, if XT_SYMMETRIC_HEAP_SIZE is not set, the default symmetric
		 heap per PE for a PGAS application is 64M. Therefore, if a Coarray Fortran application requires 1000M per PE
		 and the user does not set XT_SYMMETRIC_HEAP_SIZE, one of the coarray allocate statements will fail to find
		 enough memory. The symmetric heap is reserved at program launch and its size does not change.

		 For PGAS applications using SHMEM, either XT_SYMMETRIC_HEAP_SIZE or SMA_SYMMETRIC_SIZE should be used to set
		 the size of the symmetric heap. Cray XC series systems support a growable symmetric heap, so if
		 XT_SYMMETRIC_HEAP_SIZE or SMA_SYMMETRIC_SIZE is not set, the symmetric heap grows dynamically as needed to a
		 maximum of 2GB per PE. (Cray XE and Cray XK series systems do not support growable symmetric heap and have no
		 default symmetric heap value.)

		 The aprun -m option does not change the size of the symmetric heap allocated by UPC or Fortran applications
		 upon startup. The -m option refers to the total amount of memory available to a PE, which includes all memory
		 and not just the symmetric heap. Use -m option only if necessary.
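
		 For example, a Coarray Fortran job that needs roughly 1000M of symmetric heap per PE might set (a sketch;
		 the application name and launch options are illustrative):

		   setenv XT_SYMMETRIC_HEAP_SIZE 1000M
		   aprun -n 128 ./caf_app.x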

       The following variables affect libhugetlbfs:

       HUGETLB_DEFAULT_PAGE_SIZE
		 Override the system default huge page size for all uses except the hugetlbfs-backed symmetric heap used by
		 SHMEM and PGAS programming models. The default huge page size is 2M.

		 Additionally supported on Gemini systems: 128K, 512K , 8M, 16M, 64M.

		 Additionally supported on Aries systems: 4M, 8M, 16M, 32M, 64M, 128M, 256M, 512M, 1GB, 2GB.

       HUGETLB_ELFMAP
		 Set to W to map the read-write sections (writable static data, bss) onto huge pages.

		 Set to R to map the read-execute segment (text, read-only static data) onto huge pages.

		 Set to RW to map both onto huge pages.

       HUGETLB_FORCE_ELFMAP
		 If set to yes, and LD_PRELOAD contains libhugetlbfs.so, then libhugetlbfs will load all parts of the text, data
		 and bss that fall on huge page boundaries onto huge pages. The parts of the text and data and bss sections that
		 do not fall into whole huge pages (e.g. the "edges") are left on 4K pages.

		 If set to yes+ (Cray extension), then all of the text and/or data and bss (per direction of HUGETLB_ELFMAP)
		 will be mapped onto huge pages, including the "edges". Note that the Cray extension works for both static and
		 dynamic executables and does not depend on LD_PRELOAD having libhugetlbfs.so in it.

		 If there is an overlap of the read-execute and the read-write sections, then a new mapping for the overlap will
		 be made with combined permissions (i.e. RWX). Using the link option specified in the craype-hugepages modules
		 avoids this overlap.

       HUGETLB_MORECORE
		 Set to yes to map the heap (also relates to the private heap in SHMEM applications) onto huge pages. Enables
		 malloc() to use memory backed by huge pages automatically.

       HUGETLB_RESTRICT_EXE=exe1[:exe2:exe3:...]
		 Selectively enables libhugetlbfs to map only the named executables onto huge pages. The executables are named
		 by the last component of the pathname; use a colon to separate the names of multiple executables. For example,
		 if your executable is /lus/home/user/bin/mytest.x, specify:

		 HUGETLB_RESTRICT_EXE=mytest.x

       HUGETLB_VERBOSE
		 The range of the value is from 0 to 99. Setting to a nonzero number causes libhugetlbfs to print out
		 informational messages. A value of 99 prints out all available information.
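
       Putting the libhugetlbfs variables together, a typical job that maps the heap and the writable static data onto 8M huge
       pages and prints informational messages might look like the following sketch (the craype-hugepages8M module already sets
       most of these values; the explicit setenv lines are shown only for clarity):

	 module load craype-hugepages8M
	 setenv HUGETLB_DEFAULT_PAGE_SIZE 8M
	 setenv HUGETLB_MORECORE yes
	 setenv HUGETLB_ELFMAP W
	 setenv HUGETLB_VERBOSE 3
	 aprun -n 32 ./app.x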

NOTES
   Gemini NIC
       There are two hardware mechanisms used by the Gemini NIC to translate virtual to physical memory references on the Cray
       XE and Cray XK systems. GNI and DMAPP are low level libraries which provide communication services to user level software
       and implement a logically shared, distributed memory programming model.

       ·  GART is a feature of many AMD64 processors that allows the system to access virtually contiguous user pages that are
	  backed by non-contiguous physical pages. The GART aggregates the Linux standard 4 Kbyte pages into larger virtually
	  contiguous memory regions. The contiguous pages exist in a portion of the physical address space known as the Graphics
	  Aperture. The GART's graphics aperture size is 2GiB. Therefore, the total memory which can be referenced through GART
	  is limited to 2GiB per compute node.

       ·  The Memory Relocation Table (MRT) on the Gemini NIC maps the memory references contained in incoming network packets
	  to physical memory on the local node. Memory references through the MRT map to a much larger address range than they
	  do through the GART. Each NIC has its own MRT. MRT page sizes range from 128 K to 1 Gbyte, but all the entries on a
	  given node must have the same page size. The MRT entries are created by kGNI in response to requests from the
	  application, usually the uGNI library. There are 16K MRT entries. The default MRT page size is 2Mbytes, which maps to
	  32Gbytes (16K*2M). HUGETLB_DEFAULT_PAGE_SIZE sets the MRT page size.

       Depending on the size of the allocated memory region and other default behavior, the memory registration function (of
       GNI/DMAPP) asks the kernel to create either GART entries on the AMD processor, or, in the case of huge pages, create
       entries in the Memory Relocation Table (MRT) on the NIC, to span the allocated memory region. User virtual memory that is
       to be read or written across nodes, generally must first be registered on the node; its physical location(s) and
       extent(s) loaded into the Gemini Memory Descriptor Table (MDD) and either the Opteron GART or the Gemini MRT.

       Required GART Address Translation: Lustre I/O uses the GART. The Lustre Network Driver (LND) uses 1 Mbyte buffers,
       constructed out of smaller pages using the GART. DVS uses the GART.

       Required MRT Address Translation:  User virtual memory mapped by huge pages (via a hugetlbfs file system) will be
       registered in the MRT.

       DMAPP mmaps the symmetric heap directly, regardless of its size, to the hugetlbfs file system if it is mounted, which it
       normally is on Cray XE systems. So, any application using DMAPP (e.g. SHMEM, PGAS programming models) will use MRT for
       memory references within the symmetric heap. The symmetric heap always uses huge pages, regardless of whether a hugepages
       module is loaded. Note that the libhugetlbfs library is not used in this case. The value of HUGETLB_DEFAULT_PAGE_SIZE
       determines the page size for the symmetric heap but the other HUGETLB environment variables have no effect.

       When an application's memory requirements, (specifically memory which is mapped through the HSN), exceeds the GART
       aperture size (2GiB) on a single node, the application must be linked with the libhugetlbfs library, to use the larger
       address range available with huge pages.

       Default Behavior If Not Using craype-hugepages Modules: If there is no craype-hugepages module loaded and if none of the
       HUGETLB environment variables are set, by default the symmetric heap (in the case of SHMEM or PGAS programming models) is
       mapped onto huge pages but most other memory is mapped onto base pages which uses GART. Considering the 2GiB GART per
       node limit which is shared between application PEs on a node, Lustre and DVS, it is advisable to map the static data
       section and private heap onto huge pages. This can be selectively changed by using the proper link options and setting
       the environment variables HUGETLB_ELFMAP=W, and HUGETLB_MORECORE=yes.

   Aries NIC
       In Cray systems which have the Aries NIC, the Aries IO Memory Management Unit (IOMMU) provides hardware support for
       memory protection and address translation. The Aries IOMMU uses an entirely different memory translation mechanism than
       Gemini uses:

       ·  The IOMMU is divided into 16 translation context registers (TCRs). Each translation context (TC) supports a single
	  page size. The TCRs can independently address different page sizes and present that to the network as a contiguous
	  memory domain. The TCR entries are used to set and clear the page table entries (PTEs) used by GNI. PTE entries are
	  cached in Aries NIC memory in a page table. Up to 512 PTEs can be used by applications. 512MiB (largest hugepage size)
	  x 512 PTEs = 256GiB of addressable memory per node on Aries systems.

   Other Notes on Memory Usage
       Huge pages benefit applications which have a large working set size (hundreds of Mbytes or many Gbytes and above), since
       such a working set would require many virtual-to-physical address translations when using the default 4K pages. By using
       huge pages, the number of required address translations is decreased, which benefits application performance by reducing
       the time spent refilling the TLB with translation data. Larger pages increase memory reach but may also exhaust available
       memory more quickly. Thus, the optimal page size may vary from application to application.

       With hugepages, an application is still limited by the total memory on the node. Also memory fragmentation can decrease
       available memory. See ISSUES.

       The /proc/meminfo file does not give a complete picture of huge page usage and is deprecated for this purpose.
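
       The per-size counters under sysfs give a more accurate view of huge page usage (a sketch assuming the standard sysfs
       layout; the directory name encodes the page size in kB):

	 cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
	 cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
	 cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages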

   Running Independent Software Vendor (ISV) Applications
       To enable a dynamically linked executable, that was not originally linked with libhugetlbfs, to use Cray's libhugetlbfs
       library at runtime, you must first load a hugepages module and set the environment variable LD_PRELOAD so that it
       contains the libhugetlbfs pathname:

	 module load craype-hugepages2M
	 export LD_PRELOAD=/usr/lib64/libhugetlbfs.so

       If an ISV application is already using LD_PRELOAD to set dynamic library dependencies, then use a white-space separated
       list. For example:

	 export LD_PRELOAD="/usr/lib64/libhugetlbfs.so /directory_name/lib.so"

       To confirm the usage of hugepages, one may set HUGETLB_VERBOSE to 3 or higher:

	 export HUGETLB_VERBOSE=3

       Statically linked executables can only use Cray's libhugetlbfs if they are linked with it. Statically linked executables
       do not process LD_PRELOAD; therefore statically linked ISVs must be relinked with libhugetlbfs. See Module Support for
       compiling and linking.

       The nm and ldd commands are useful for determining the contents and dynamic dependencies of executables.
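
       For example (hypothetical executable name):

	 ldd ./mytest.x | grep hugetlbfs          # dynamic dependency on libhugetlbfs.so, if any
	 nm ./mytest.x | grep hugetlbfs           # libhugetlbfs symbols linked in statically, if any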

   Selective Mapping
       ISV applications sometimes consist of scripts which run several executables, only some of which need to run with huge
       pages. The environment variable HUGETLB_RESTRICT_EXE enables the libhugetlbfs library to selectively map only the named
       executables onto huge pages.

   Terms
       Text Segment - contains the actual instructions to be executed.

       Data Segment - contains the program's data part, which is further divided into data, bss, and heap sections.

       ·  Data- global, static initialized data.

       ·  BSS - global, static uninitialized data.

       ·  Heap - dynamically allocated memory.

       Stack - used for local variables, stack frames.

       Symmetric Heap - contains dynamically allocated memory for a PE, which is kept in sync by the programming model (e.g.
       SHMEM) with that of another PE. See intro_shmem(3) man page for additional information. The private heap contains
       dynamically allocated memory which is specific to a PE.

       GART - Graphics Aperture Relocation Table

       HSN - High Speed Network

       IOMMU - I/O Memory Management Unit

       ISV - Independent Software Vendor

       MRT - Memory Relocation Table

       TLB - Translation Lookaside Buffer, the memory management hardware used to translate virtual addresses into physical
       addresses.

ISSUES
       Huge pages are a per-node resource, not a per-job resource, nor a per-process resource. There is no guarantee that the
       requested number of huge pages will be available on the compute nodes. If the memory pool becomes fragmented, which it
       can over time, the number of free blocks that are equal to or larger than the huge page size can decrease below the
       number needed to service the request, even though there may be enough free memory in the pool when summing free blocks of
       all sizes. For this reason, use huge page sizes no larger than needed.
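
       The per-order free block counts in /proc/buddyinfo can be used to gauge how fragmented node memory has become (a sketch;
       each column shows the number of free contiguous blocks of increasing power-of-two size in each zone):

	 cat /proc/buddyinfo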

       If the heap is mapped to huge pages (by setting HUGETLB_MORECORE to yes) and if a malloc call requires that the heap be
       extended, and if there are not enough free blocks in the memory pool large enough to support the required number of huge
       pages, libhugetlbfs will issue the following WARNING message and then glibc will fall back to allocating base pages.

	 libhugetlbfs [nid000xx:xxxxx]: WARNING: New heap segment map at
	 0x10000000 failed: Cannot allocate memory

       Since this is a warning, jobs are able to continue running after this message occurs. However, because the allocated
       base pages use GART entries, and, as described in the NOTES section, there are a limited number of GART entries, future
       memory requests may fail altogether due to a lack of available GART entries.

       With craype-hugepages modules loaded, it is no longer necessary to include -lhugetlbfs on the link line. Doing so will
       result in messages indicating multiple definitions, such as:

	 //usr/lib64/libhugetlbfs.a(elflink.o): In function
	 `__libhugetlbfs_do_remap_segments':

	 /usr/src/packages/BUILD/cray-libhugetlbfs-2.11/elflink.c:2012:
	 multiple definition of `__libhugetlbfs_do_remap_segments'

	 //usr/lib64/libhugetlbfs.a(elflink.o):/usr/src/packages/BUILD/
	 cray-libhugetlbfs-2.11/elflink.c:2012: first defined here

       Adjust makefiles or build scripts accordingly.
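
       For example, a link line such as (hypothetical file names):

	 cc -o app.x app.o -lhugetlbfs

       should simply become, with a craype-hugepages module loaded:

	 cc -o app.x app.o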

SEE ALSO
       hugeadm(8), cc(1), CC(1), ftn(1), aprun(1), intro_mpi(3), intro_shmem(3), libhugetlbfs(7)

       /usr/share/doc/libhugetlbfs/HOWTO

							   03-08-2019					      intro_hugepages(1)

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Additional Huge Pages
  2022-07-22 19:20   ` Albert, Des
@ 2022-07-22 19:39     ` Matthew Wilcox
  2022-07-22 19:53       ` Albert, Des
  2022-07-27 22:48       ` Albert, Des
  0 siblings, 2 replies; 8+ messages in thread
From: Matthew Wilcox @ 2022-07-22 19:39 UTC (permalink / raw)
  To: Albert, Des; +Cc: Mike Kravetz, songmuchun, linux-mm

On Fri, Jul 22, 2022 at 07:20:51PM +0000, Albert, Des wrote:
> This is the first time I have heard of the folio abstraction as the future for memory management. When you mention that future hugetlb work will be based on that concept, it seems unlikely that there would be interest in code that is not consistent with those developments. I also doubt that there would be a justification to 'update' the code to be consistent with future kernel developments.
> 
> I am therefore forming the impression that this idea may not be of interest to the Linux kernel community; however, I do not have the detailed technical depth of the development team.
> 
> Do you have some more information about this folio abstraction plan ?

Hi Des!  I'm the lead on the folio abstraction plan, so hopefully I can
be of some help.

Folios, like your Cray Hugepages, broaden the supported page sizes.
They were originally conceived for relatively small page sizes (eg
16kB-256kB) and have been implemented so far only for the XFS filesystem.
Other filesystems are in progress.

This is the first hint we've had that people are interested in folio
sizes above 2MB.  I think the folio work should make supporting this
Cray requirement much easier.  It's certainly good to know that this
is interesting before we do too much work on converting the existing
hugetlb code over to folios.  Are you able to direct any developers to
help us with this?  We can definitely work together on this project;
we've had a similar collaboration running for a few years now on the
Transparent Huge Page side of things.



^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: Additional Huge Pages
  2022-07-22 19:39     ` Matthew Wilcox
@ 2022-07-22 19:53       ` Albert, Des
  2022-07-27 22:48       ` Albert, Des
  1 sibling, 0 replies; 8+ messages in thread
From: Albert, Des @ 2022-07-22 19:53 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Mike Kravetz, songmuchun, linux-mm

Hi Matthew

Thanks for your prompt response.  I had already jumped to the conclusion that the folio abstraction idea would effectively negate the need for additional huge pages. It seems that I was wrong.

I can definitely try to connect you with our developers; however, I first wanted to assess whether there was real interest from the relevant kernel.org maintainers. I don't want our technical people spending time on work that will not be of overall benefit. In my opinion, it would be good to see this code as a standard Linux feature so that our developers can devote their attention to other work. Our focus is HPC, so I had assumed that the feature would have limited interest in the broader Linux community.

As previously stated, I cannot find much evidence of the current benefits from huge pages above 2 MB, but I am certainly not qualified to discuss the low-level details.

Des

-----Original Message-----
From: Matthew Wilcox <willy@infradead.org> 
Sent: Friday, July 22, 2022 12:39 PM
To: Albert, Des <des.albert@hpe.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>; songmuchun@bytedance.com; linux-mm@kvack.org
Subject: Re: Additional Huge Pages

On Fri, Jul 22, 2022 at 07:20:51PM +0000, Albert, Des wrote:
> This is the first time I have heard of the folio abstraction as the future for memory management. When you mention that future hugetlb work will be based on that concept, it seems unlikely that there would be interest in code that is not consistent with those developments. I also doubt that there would be a justification to 'update' the code to be consistent with future kernel developments.
> 
> I am therefore forming the impression that this idea may not be of interest to the Linux kernel community; however, I do not have the detailed technical depth of the development team.
> 
> Do you have some more information about this folio abstraction plan ?

Hi Des!  I'm the lead on the folio abstraction plan, so hopefully I can be of some help.

Folios, like your Cray Hugepages, broaden the supported page sizes.
They were originally conceived for relatively small page sizes (eg
16kB-256kB) and have been implemented so far only for the XFS filesystem.
Other filesystems are in progress.

This is the first hint we've had that people are interested in folio sizes above 2MB.  I think the folio work should make supporting this Cray requirement much easier.  It's certainly good to know that this is interesting before we do too much work on converting the existing hugetlb code over to folios.  Are you able to direct any developers to help us with this?  We can definitely work together on this project; we've had a similar collaboration running for a few years now on the Transparent Huge Page side of things.



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Additional Huge Pages
  2022-07-22 17:20 Additional Huge Pages Albert, Des
  2022-07-22 18:12 ` Mike Kravetz
@ 2022-07-25 15:28 ` Rongwei Wang
  1 sibling, 0 replies; 8+ messages in thread
From: Rongwei Wang @ 2022-07-25 15:28 UTC (permalink / raw)
  To: Albert, Des, mike.kravetz, songmuchun, linux-mm



On 7/23/22 1:20 AM, Albert, Des wrote:
> Hi
> 
> I am the Product Manager for the HPE Cray Operating System ( formerly Cray Linux Environment )
> 
> One of the features of this product is a component known as additional huge pages. This is kernel code that enables the selection of 'non-standard' huge page sizes.
> For example, the current implementation allows for selection of huge page sizes of 2, 4, 8, 16, 32, 64, 128, 256 and 512 MB as well as 1 and 2 GB.
Hi Des

It seems like an interesting feature.
Is it a transparent huge page? I can imagine a huge page size of 2 MB, 
but what is a huge page size of 4 MB, 8 MB, etc.? Two consecutive 2 MB 
huge pages, or four? I'm interested in the implementation of the 
additional huge pages you described, because I've wanted to try this 
before but found that the changes to the kernel were quite large.

> 
> We are currently evaluating the concept of providing this code to kernel.org. I realize that this would require dedication of technical resources to work with maintainers.
> 
> I would like to know if there is interest in this suggestion. I realize that Transparent Huge Pages may be regarded as a more general approach to this requirement.
> 
> Thanks
> Des
> 
> Dr Des Albert
> HPC Software Product Manager
> +1 360 312 6169
> des.albert@hpe.com
> 
> Hewlett Packard Enterprise
> 
> 


^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: Additional Huge Pages
  2022-07-22 19:39     ` Matthew Wilcox
  2022-07-22 19:53       ` Albert, Des
@ 2022-07-27 22:48       ` Albert, Des
  2022-12-21 23:43         ` Albert, Des
  1 sibling, 1 reply; 8+ messages in thread
From: Albert, Des @ 2022-07-27 22:48 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Mike Kravetz, songmuchun, linux-mm

Hi Matt

I have discussed this topic in more detail with the primary maintainer of this kernel code. He confirmed that the additional huge pages are currently of most value to those who develop software that supports the HPE Slingshot High Speed Network. It is therefore of considerable benefit where networks access large areas of memory with PGAS (Partitioned Global Address Space) or Symmetric Hierarchical Memory (SHMEM). Because these programming concepts are relatively specific to HPC, it seems likely that it will remain a niche topic.
I suspect that there is an assumption that offering code to kernel.org will, in some way, reduce the HPE resources required to maintain the code. I doubt that would be possible. In the opinion of the developer, the primary benefit of contributing code to kernel.org comes from ensuring that other changes to the Linux kernel do not adversely affect the kernel code that HPE has contributed and will continue to maintain.  

At this time, the primary maintainer is dedicated to other projects and is therefore not available for more detailed discussions. My primary goal has been to establish contact with the appropriate maintainers at kernel.org and seek responses to the suggestion of offering the code.

Des 

-----Original Message-----
From: Matthew Wilcox <willy@infradead.org> 
Sent: Friday, July 22, 2022 12:39 PM
To: Albert, Des <des.albert@hpe.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>; songmuchun@bytedance.com; linux-mm@kvack.org
Subject: Re: Additional Huge Pages

On Fri, Jul 22, 2022 at 07:20:51PM +0000, Albert, Des wrote:
> This is the first time I have heard of the folio abstraction as the future for memory management. When you mention that future hugetlb work will be based on that concept, it seems unlikely that there would be interest in code that is not consistent with those developments. I also doubt that there would be a justification to 'update' the code to be consistent with future kernel developments.
> 
> I am therefore forming the impression that this idea may not be of interest to the Linux kernel community; however, I do not have the detailed technical depth of the development team.
> 
> Do you have some more information about this folio abstraction plan ?

Hi Des!  I'm the lead on the folio abstraction plan, so hopefully I can be of some help.

Folios, like your Cray Hugepages, broaden the supported page sizes.
They were originally conceived for relatively small page sizes (eg
16kB-256kB) and have been implemented so far only for the XFS filesystem.
Other filesystems are in progress.

This is the first hint we've had that people are interested in folio sizes above 2MB.  I think the folio work should make supporting this Cray requirement much easier.  It's certainly good to know that this is interesting before we do too much work on converting the existing hugetlb code over to folios.  Are you able to direct any developers to help us with this?  We can definitely work together on this project; we've had a similar collaboration running for a few years now on the Transparent Huge Page side of things.



^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: Additional Huge Pages
  2022-07-27 22:48       ` Albert, Des
@ 2022-12-21 23:43         ` Albert, Des
  0 siblings, 0 replies; 8+ messages in thread
From: Albert, Des @ 2022-12-21 23:43 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Mike Kravetz, songmuchun, linux-mm

Hi Matt

I apologize for the long break since our last communication. The technical people who need to investigate this topic have been consumed with other priorities. During the last week, some effort was devoted to investigating the additional huge page code's use of memory folios. It was determined that our implementation can be updated to use these features without much effort. Initial tests suggest that there is unlikely to be much difference in behaviour other than more efficient use of compound pages.

Having effectively resolved that question of compatibility, there remains the larger topic of whether the HPE code is of interest to kernel.org.
The general feeling at HPE is that the feature is specifically targeted at improving the MPI performance of high speed networks such as HPE Slingshot and may therefore not be of general interest to the broader Linux community.

My reading of your work in this area suggests that you have plenty of other priorities related to file system changes, so there is unlikely to be much interest in the HPE code. Please correct me if my assessment is wrong.

Regards
Des

-----Original Message-----
From: Albert, Des 
Sent: Wednesday, July 27, 2022 3:48 PM
To: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>; songmuchun@bytedance.com; linux-mm@kvack.org
Subject: RE: Additional Huge Pages

Hi Matt

I have discussed this topic in more detail with the primary maintainer of this kernel code. He confirmed that the additional huge pages are currently of most value to those who develop software that supports the HPE Slingshot High Speed Network. It is therefore of considerable benefit where networks access large areas of memory with PGAS (Partitioned Global Address Space) or Symmetric Hierarchical Memory (SHMEM). Because these programming concepts are relatively specific to HPC, it seems likely that it will remain a niche topic.
I suspect that there is an assumption that offering code to kernel.org will, in some way, reduce the HPE resources required to maintain the code. I doubt that would be possible. In the opinion of the developer, the primary benefit of contributing code to kernel.org comes from ensuring that other changes to the Linux kernel do not adversely affect the kernel code that HPE has contributed and will continue to maintain.  

At this time, the primary maintainer is dedicated to other projects and is therefore not available for more detailed discussions. My primary goal has been to establish contact with the appropriate maintainers at kernel.org and seek responses to the suggestion of offering the code.

Des 

-----Original Message-----
From: Matthew Wilcox <willy@infradead.org> 
Sent: Friday, July 22, 2022 12:39 PM
To: Albert, Des <des.albert@hpe.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>; songmuchun@bytedance.com; linux-mm@kvack.org
Subject: Re: Additional Huge Pages

On Fri, Jul 22, 2022 at 07:20:51PM +0000, Albert, Des wrote:
> This is the first time I have heard of the folio abstraction as the future for memory management. When you mention that future hugetlb work will be based on that concept, it seems unlikely that there would be interest in code that is not consistent with those developments. I also doubt that there would be a justification to 'update' the code to be consistent with future kernel developments.
> 
> I am therefore forming the impression that this idea may not be of interest to the Linux kernel community; however, I do not have the detailed technical depth of the development team.
> 
> Do you have some more information about this folio abstraction plan ?

Hi Des!  I'm the lead on the folio abstraction plan, so hopefully I can be of some help.

Folios, like your Cray Hugepages, broaden the supported page sizes.
They were originally conceived for relatively small page sizes (eg
16kB-256kB) and have been implemented so far only for the XFS filesystem.
Other filesystems are in progress.

This is the first hint we've had that people are interested in folio sizes above 2MB.  I think the folio work should make supporting this Cray requirement much easier.  It's certainly good to know that this is interesting before we do too much work on converting the existing hugetlb code over to folios.  Are you able to direct any developers to help us with this?  We can definitely work together on this project; we've had a similar collaboration running for a few years now on the Transparent Huge Page side of things.



^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2022-12-21 23:44 UTC | newest]

Thread overview: 8+ messages
2022-07-22 17:20 Additional Huge Pages Albert, Des
2022-07-22 18:12 ` Mike Kravetz
2022-07-22 19:20   ` Albert, Des
2022-07-22 19:39     ` Matthew Wilcox
2022-07-22 19:53       ` Albert, Des
2022-07-27 22:48       ` Albert, Des
2022-12-21 23:43         ` Albert, Des
2022-07-25 15:28 ` Rongwei Wang
