From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: mhocko@suse.com, vbabka@suse.cz, mgorman@suse.de,
minchan@kernel.org, aneesh.kumar@linux.vnet.ibm.com,
bsingharora@gmail.com, srikar@linux.vnet.ibm.com,
haren@linux.vnet.ibm.com, jglisse@redhat.com,
dave.hansen@intel.com, dan.j.williams@intel.com
Subject: [PATCH V3 2/4] mm: Enable HugeTLB allocation isolation for CDM nodes
Date: Wed, 15 Feb 2017 17:37:24 +0530 [thread overview]
Message-ID: <20170215120726.9011-3-khandual@linux.vnet.ibm.com> (raw)
In-Reply-To: <20170215120726.9011-1-khandual@linux.vnet.ibm.com>
HugeTLB allocation, release and accounting currently span all the nodes
in the N_MEMORY node mask. Coherent device memory nodes should not take
part in these allocations. Hence use system_mem_nodemask() to fetch the
system-RAM-only nodes on the platform and use that mask for HugeTLB
allocation purposes instead of N_MEMORY. This isolates coherent device
memory nodes from HugeTLB allocations.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
---
mm/hugetlb.c | 25 ++++++++++++++++---------
1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c7025c1..9a46d9f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1790,6 +1790,7 @@ static void return_unused_surplus_pages(struct hstate *h,
unsigned long unused_resv_pages)
{
unsigned long nr_pages;
+ nodemask_t system_mem = system_mem_nodemask();
/* Cannot return gigantic pages currently */
if (hstate_is_gigantic(h))
@@ -1816,7 +1817,7 @@ static void return_unused_surplus_pages(struct hstate *h,
while (nr_pages--) {
h->resv_huge_pages--;
unused_resv_pages--;
- if (!free_pool_huge_page(h, &node_states[N_MEMORY], 1))
+ if (!free_pool_huge_page(h, &system_mem, 1))
goto out;
cond_resched_lock(&hugetlb_lock);
}
@@ -2107,8 +2108,9 @@ int __weak alloc_bootmem_huge_page(struct hstate *h)
{
struct huge_bootmem_page *m;
int nr_nodes, node;
+ nodemask_t system_mem = system_mem_nodemask();
- for_each_node_mask_to_alloc(h, nr_nodes, node, &node_states[N_MEMORY]) {
+ for_each_node_mask_to_alloc(h, nr_nodes, node, &system_mem) {
void *addr;
addr = memblock_virt_alloc_try_nid_nopanic(
@@ -2177,13 +2179,14 @@ static void __init gather_bootmem_prealloc(void)
static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
{
unsigned long i;
+ nodemask_t system_mem = system_mem_nodemask();
+
for (i = 0; i < h->max_huge_pages; ++i) {
if (hstate_is_gigantic(h)) {
if (!alloc_bootmem_huge_page(h))
break;
- } else if (!alloc_fresh_huge_page(h,
- &node_states[N_MEMORY]))
+ } else if (!alloc_fresh_huge_page(h, &system_mem))
break;
}
h->max_huge_pages = i;
@@ -2420,6 +2423,8 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
unsigned long count, size_t len)
{
int err;
+ nodemask_t system_mem = system_mem_nodemask();
+
NODEMASK_ALLOC(nodemask_t, nodes_allowed, GFP_KERNEL | __GFP_NORETRY);
if (hstate_is_gigantic(h) && !gigantic_page_supported()) {
@@ -2434,7 +2439,7 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
if (!(obey_mempolicy &&
init_nodemask_of_mempolicy(nodes_allowed))) {
NODEMASK_FREE(nodes_allowed);
- nodes_allowed = &node_states[N_MEMORY];
+ nodes_allowed = &system_mem;
}
} else if (nodes_allowed) {
/*
@@ -2444,11 +2449,11 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
init_nodemask_of_node(nodes_allowed, nid);
} else
- nodes_allowed = &node_states[N_MEMORY];
+ nodes_allowed = &system_mem;
h->max_huge_pages = set_max_huge_pages(h, count, nodes_allowed);
- if (nodes_allowed != &node_states[N_MEMORY])
+ if (nodes_allowed != &system_mem)
NODEMASK_FREE(nodes_allowed);
return len;
@@ -2745,9 +2750,10 @@ static void hugetlb_register_node(struct node *node)
*/
static void __init hugetlb_register_all_nodes(void)
{
+ nodemask_t nodes = system_mem_nodemask();
int nid;
- for_each_node_state(nid, N_MEMORY) {
+ for_each_node_mask(nid, nodes) {
struct node *node = node_devices[nid];
if (node->dev.id == nid)
hugetlb_register_node(node);
@@ -3019,11 +3025,12 @@ void hugetlb_show_meminfo(void)
{
struct hstate *h;
int nid;
+ nodemask_t system_mem = system_mem_nodemask();
if (!hugepages_supported())
return;
- for_each_node_state(nid, N_MEMORY)
+ for_each_node_mask(nid, system_mem)
for_each_hstate(h)
pr_info("Node %d hugepages_total=%u hugepages_free=%u hugepages_surp=%u hugepages_size=%lukB\n",
nid,
--
2.9.3