From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753373AbbDCBlW (ORCPT ); Thu, 2 Apr 2015 21:41:22 -0400
Received: from mail-ig0-f173.google.com ([209.85.213.173]:35859 "EHLO
	mail-ig0-f173.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752507AbbDCBlV (ORCPT );
	Thu, 2 Apr 2015 21:41:21 -0400
Date: Thu, 2 Apr 2015 18:41:18 -0700 (PDT)
From: David Rientjes
X-X-Sender: rientjes@chino.kir.corp.google.com
To: Andrew Morton
cc: Michal Hocko, Vlastimil Babka, Johannes Weiner,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [patch] mm, memcg: sync allocation and memcg charge gfp flags for
	thp fix fix
In-Reply-To: <20150318161407.GP17241@dhcp22.suse.cz>
Message-ID: 
References: <1426514892-7063-1-git-send-email-mhocko@suse.cz>
	<55098D0A.8090605@suse.cz> <20150318150257.GL17241@dhcp22.suse.cz>
	<55099C72.1080102@suse.cz> <20150318155905.GO17241@dhcp22.suse.cz>
	<5509A31C.3070108@suse.cz> <20150318161407.GP17241@dhcp22.suse.cz>
User-Agent: Alpine 2.10 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

"mm, memcg: sync allocation and memcg charge gfp flags for THP" in -mm
introduces a formal argument to pass the gfp mask for khugepaged's
hugepage allocation.  This is just too ugly to live.

alloc_hugepage_gfpmask() cannot differ between NUMA and UMA configs by
anything in GFP_RECLAIM_MASK, which is the only thing that matters for
memcg reclaim, so just determine the gfp flags once in
collapse_huge_page() and avoid the complexity.

Signed-off-by: David Rientjes
---
 -mm: intended to be folded into
      mm-memcg-sync-allocation-and-memcg-charge-gfp-flags-for-thp.patch

 mm/huge_memory.c | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2373,16 +2373,12 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 }
 
 static struct page *
-khugepaged_alloc_page(struct page **hpage, gfp_t *gfp, struct mm_struct *mm,
+khugepaged_alloc_page(struct page **hpage, gfp_t gfp, struct mm_struct *mm,
 		       struct vm_area_struct *vma, unsigned long address,
 		       int node)
 {
 	VM_BUG_ON_PAGE(*hpage, *hpage);
 
-	/* Only allocate from the target node */
-	*gfp = alloc_hugepage_gfpmask(khugepaged_defrag(), __GFP_OTHER_NODE) |
-	       __GFP_THISNODE;
-
 	/*
 	 * Before allocating the hugepage, release the mmap_sem read lock.
 	 * The allocation can take potentially a long time if it involves
@@ -2391,7 +2387,7 @@ khugepaged_alloc_page(struct page **hpage, gfp_t *gfp, struct mm_struct *mm,
 	 */
 	up_read(&mm->mmap_sem);
 
-	*hpage = alloc_pages_exact_node(node, *gfp, HPAGE_PMD_ORDER);
+	*hpage = alloc_pages_exact_node(node, gfp, HPAGE_PMD_ORDER);
 	if (unlikely(!*hpage)) {
 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
 		*hpage = ERR_PTR(-ENOMEM);
@@ -2445,18 +2441,13 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 }
 
 static struct page *
-khugepaged_alloc_page(struct page **hpage, gfp_t *gfp, struct mm_struct *mm,
+khugepaged_alloc_page(struct page **hpage, gfp_t gfp, struct mm_struct *mm,
 		       struct vm_area_struct *vma, unsigned long address,
 		       int node)
 {
 	up_read(&mm->mmap_sem);
 	VM_BUG_ON(!*hpage);
 
-	/*
-	 * khugepaged_alloc_hugepage is doing the preallocation, use the same
-	 * gfp flags here.
-	 */
-	*gfp = alloc_hugepage_gfpmask(khugepaged_defrag(), 0);
 	return *hpage;
 }
 #endif
@@ -2495,8 +2486,12 @@ static void collapse_huge_page(struct mm_struct *mm,
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 
+	/* Only allocate from the target node */
+	gfp = alloc_hugepage_gfpmask(khugepaged_defrag(), __GFP_OTHER_NODE) |
+		__GFP_THISNODE;
+
 	/* release the mmap_sem read lock. */
-	new_page = khugepaged_alloc_page(hpage, &gfp, mm, vma, address, node);
+	new_page = khugepaged_alloc_page(hpage, gfp, mm, vma, address, node);
 	if (!new_page)
 		return;
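
P.S. The GFP_RECLAIM_MASK argument above can be checked mechanically.
Below is a minimal userspace sketch, not kernel code: the flag values
are copied from the 4.0-era include/linux/gfp.h and mm/internal.h and
should be treated as illustrative assumptions.  It asserts that the only
bits by which the NUMA and UMA khugepaged masks differ, __GFP_OTHER_NODE
and __GFP_THISNODE, fall outside GFP_RECLAIM_MASK, which is why a single
mask computed in collapse_huge_page() serves memcg charging in both
configs.

/*
 * Minimal userspace sketch (not kernel code).  Flag values copied from
 * the 4.0-era include/linux/gfp.h and mm/internal.h; illustrative only.
 */
#include <assert.h>
#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_WAIT		0x10u
#define __GFP_HIGH		0x20u
#define __GFP_IO		0x40u
#define __GFP_FS		0x80u
#define __GFP_NOWARN		0x200u
#define __GFP_REPEAT		0x400u
#define __GFP_NOFAIL		0x800u
#define __GFP_NORETRY		0x1000u
#define __GFP_MEMALLOC		0x2000u
#define __GFP_NOMEMALLOC	0x10000u
#define __GFP_THISNODE		0x40000u
#define __GFP_OTHER_NODE	0x800000u

/* The bits memcg charging/reclaim actually honors (mm/internal.h) */
#define GFP_RECLAIM_MASK	(__GFP_WAIT | __GFP_HIGH | __GFP_IO |	  \
				 __GFP_FS | __GFP_NOWARN | __GFP_REPEAT | \
				 __GFP_NOFAIL | __GFP_NORETRY |		  \
				 __GFP_MEMALLOC | __GFP_NOMEMALLOC)

int main(void)
{
	/*
	 * The NUMA path adds __GFP_OTHER_NODE | __GFP_THISNODE on top of
	 * what the UMA path uses; everything else is identical.
	 */
	gfp_t numa_extra = __GFP_OTHER_NODE | __GFP_THISNODE;

	/* None of those placement bits intersect GFP_RECLAIM_MASK. */
	assert((numa_extra & GFP_RECLAIM_MASK) == 0);
	printf("NUMA-only gfp bits are outside GFP_RECLAIM_MASK: OK\n");
	return 0;
}

Compiled with any C compiler, the assert holds, so the reclaim-relevant
part of the mask is the same in both configs.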