From: Reinette Chatre <reinette.chatre@intel.com>
To: tglx@linutronix.de, fenghua.yu@intel.com, tony.luck@intel.com
Cc: gavin.hindman@intel.com, vikas.shivappa@linux.intel.com, dave.hansen@intel.com, mingo@redhat.com, hpa@zytor.com, x86@kernel.org, linux-kernel@vger.kernel.org, Reinette Chatre <reinette.chatre@intel.com>, linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>, Mike Kravetz <mike.kravetz@oracle.com>, Michal Hocko <mhocko@suse.com>, Vlastimil Babka <vbabka@suse.cz>
Subject: [RFC PATCH V2 21/22] mm/hugetlb: Enable large allocations through gigantic page API
Date: Tue, 13 Feb 2018 07:47:05 -0800
Message-ID: <cf48eb8469111b3dc5fa33735ff10965c4396a99.1518443616.git.reinette.chatre@intel.com> (raw)
In-Reply-To: <cover.1518443616.git.reinette.chatre@intel.com>

Memory allocation within the kernel as supported by the SLAB allocators is limited by the maximum allocatable page order. With the default maximum page order of 11 it is not possible for the SLAB allocators to allocate more than 4MB.

Large contiguous allocations are currently possible within the kernel through the gigantic page support, the creation of which is currently directed from userspace. Expose the gigantic page support within the kernel to enable memory allocations that cannot be fulfilled by the SLAB allocators.
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Cc: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/hugetlb.h |  2 ++
 mm/hugetlb.c            | 10 ++++------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 82a25880714a..8f2125dc8a86 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -349,6 +349,8 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 				nodemask_t *nmask);
 int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 			pgoff_t idx);
+struct page *alloc_gigantic_page(int nid, unsigned int order, gfp_t gfp_mask);
+void free_gigantic_page(struct page *page, unsigned int order);

 /* arch callback */
 int __init __alloc_bootmem_huge_page(struct hstate *h);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9a334f5fb730..f3f5e4ef3144 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1060,7 +1060,7 @@ static void destroy_compound_gigantic_page(struct page *page,
 	__ClearPageHead(page);
 }

-static void free_gigantic_page(struct page *page, unsigned int order)
+void free_gigantic_page(struct page *page, unsigned int order)
 {
 	free_contig_range(page_to_pfn(page), 1 << order);
 }
@@ -1108,17 +1108,15 @@ static bool zone_spans_last_pfn(const struct zone *zone,
 	return zone_spans_pfn(zone, last_pfn);
 }

-static struct page *alloc_gigantic_page(int nid, struct hstate *h)
+struct page *alloc_gigantic_page(int nid, unsigned int order, gfp_t gfp_mask)
 {
-	unsigned int order = huge_page_order(h);
 	unsigned long nr_pages = 1 << order;
 	unsigned long ret, pfn, flags;
 	struct zonelist *zonelist;
 	struct zone *zone;
 	struct zoneref *z;
-	gfp_t gfp_mask;

-	gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
+	gfp_mask = gfp_mask | __GFP_THISNODE;
 	zonelist = node_zonelist(nid, gfp_mask);
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, gfp_zone(gfp_mask), NULL) {
 		spin_lock_irqsave(&zone->lock, flags);
@@ -1155,7 +1153,7 @@ static struct page *alloc_fresh_gigantic_page_node(struct hstate *h, int nid)
 {
 	struct page *page;

-	page = alloc_gigantic_page(nid, h);
+	page = alloc_gigantic_page(nid, huge_page_order(h), htlb_alloc_mask(h));
 	if (page) {
 		prep_compound_gigantic_page(page, huge_page_order(h));
 		prep_new_huge_page(h, page, nid);
--
2.13.6
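For context, a kernel-internal caller of the two newly exported helpers might look like the following sketch. This is kernel-context pseudocode, not compilable in isolation; the function name, node choice, order, and GFP flags are all illustrative assumptions, and only alloc_gigantic_page()/free_gigantic_page() come from this patch:

```c
/* Sketch: allocate a 256 MiB physically contiguous region (order 16
 * with 4 KiB base pages) on node 0, then release it.  Illustrative
 * only -- real callers must pick nid, order, and gfp_mask to suit. */
static int example_large_alloc(void)
{
	unsigned int order = 16;	/* 2^16 pages == 256 MiB */
	struct page *page;

	page = alloc_gigantic_page(0 /* nid */, order, GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* ... use the region, e.g. via page_address(page) ... */

	free_gigantic_page(page, order);
	return 0;
}
```

Note that alloc_gigantic_page() ORs in __GFP_THISNODE internally, so the allocation is confined to the requested node.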