Subject: Re: [PATCH v4] mm/mempolicy,hugetlb: Checking hstate for hugetlbfs page in vma_migratable
To: Li Xinhai, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, Michal Hocko, Mike Kravetz
References: <1579147885-23511-1-git-send-email-lixinhai.lxh@gmail.com>
From: Anshuman Khandual
Message-ID: <364b46d3-6dbb-4793-6cfe-5e74e1278daf@arm.com>
Date: Wed, 22 Jan 2020 11:35:58 +0530
In-Reply-To: <1579147885-23511-1-git-send-email-lixinhai.lxh@gmail.com>

On 01/16/2020 09:41 AM, Li Xinhai wrote:
> Check hstate at an early phase when isolating the page, instead of
> during the unmap and move phase, to avoid useless isolation.
>
> Signed-off-by: Li Xinhai
> Cc: Michal Hocko
> Cc: Mike Kravetz
> ---

Change log from the previous versions?
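
FWIW, the late check referred to in the description is (roughly, from memory -
not part of this patch) the early bail out in unmap_and_move_huge_page() in
mm/migrate.c:

	/* Approximate current mainline behaviour, quoted for context only */
	if (!hugepage_migration_supported(page_hstate(hpage))) {
		putback_active_hugepage(hpage);
		return -ENOSYS;
	}

so the idea here is to filter such VMAs out already at isolation time.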
>  include/linux/hugetlb.h   | 10 ++++++++++
>  include/linux/mempolicy.h | 29 +----------------------------
>  mm/mempolicy.c            | 28 ++++++++++++++++++++++++++++
>  3 files changed, 39 insertions(+), 28 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 31d4920..c9d871d 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -598,6 +598,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
>  	return arch_hugetlb_migration_supported(h);
>  }
>
> +static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
> +{
> +	return hugepage_migration_supported(hstate_vma(vma));
> +}

Another wrapper around hugepage_migration_supported() is not necessary.

> +
>  /*
>   * Movability check is different as compared to migration check.
>   * It determines whether or not a huge page should be placed on
> @@ -809,6 +814,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
>  	return false;
>  }
>
> +static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
> +{
> +	return false;
> +}
> +
>  static inline bool hugepage_movable_supported(struct hstate *h)
>  {
>  	return false;
> diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
> index 5228c62..8165278 100644
> --- a/include/linux/mempolicy.h
> +++ b/include/linux/mempolicy.h
> @@ -173,34 +173,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
>  extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
>
>  /* Check if a vma is migratable */
> -static inline bool vma_migratable(struct vm_area_struct *vma)
> -{
> -	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
> -		return false;
> -
> -	/*
> -	 * DAX device mappings require predictable access latency, so avoid
> -	 * incurring periodic faults.
> -	 */
> -	if (vma_is_dax(vma))
> -		return false;
> -
> -#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
> -	if (vma->vm_flags & VM_HUGETLB)
> -		return false;
> -#endif
> -
> -	/*
> -	 * Migration allocates pages in the highest zone. If we cannot
> -	 * do so then migration (at least from node to node) is not
> -	 * possible.
> -	 */
> -	if (vma->vm_file &&
> -		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
> -						< policy_zone)
> -		return false;
> -	return true;
> -}

Why is vma_migratable() being moved?

> +extern bool vma_migratable(struct vm_area_struct *vma);
>
>  extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long);
>  extern void mpol_put_task_policy(struct task_struct *);
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 067cf7d..8a01fb1 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1714,6 +1714,34 @@ static int kernel_get_mempolicy(int __user *policy,
>
>  #endif /* CONFIG_COMPAT */
>
> +bool vma_migratable(struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
> +		return false;
> +
> +	/*
> +	 * DAX device mappings require predictable access latency, so avoid
> +	 * incurring periodic faults.
> +	 */
> +	if (vma_is_dax(vma))
> +		return false;
> +
> +	if (is_vm_hugetlb_page(vma) &&
> +	    !vm_hugepage_migration_supported(vma))
> +		return false;

This (use hugepage_migration_supported() instead) can be added above
without the code movement.
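
IOW, something along these lines should be sufficient (untested sketch; it
assumes hstate_vma() and hugepage_migration_supported() can be used from
linux/mempolicy.h without creating an include cycle):

	/*
	 * Hypothetical replacement for the CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
	 * block in the existing inline vma_migratable(): reject hugetlb VMAs
	 * whose hstate does not support migration, without a new wrapper.
	 */
	if (is_vm_hugetlb_page(vma) &&
	    !hugepage_migration_supported(hstate_vma(vma)))
		return false;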
> +
> +	/*
> +	 * Migration allocates pages in the highest zone. If we cannot
> +	 * do so then migration (at least from node to node) is not
> +	 * possible.
> +	 */
> +	if (vma->vm_file &&
> +		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
> +						< policy_zone)
> +		return false;
> +	return true;
> +}
> +
>  struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
>  						unsigned long addr)
>  {
>