Subject: Re: [PATCH v4] mm/mempolicy,hugetlb: Checking hstate for hugetlbfs page in vma_migratable
To: Li Xinhai, linux-mm@kvack.org
Cc: akpm, mhocko, Mike Kravetz
References: <1579147885-23511-1-git-send-email-lixinhai.lxh@gmail.com> <364b46d3-6dbb-4793-6cfe-5e74e1278daf@arm.com> <2020012221211439644710@gmail.com>
From: Anshuman Khandual
Message-ID: <9e79bfa3-925c-2d8c-2a0b-ef690ed667eb@arm.com>
Date: Thu, 23 Jan 2020 13:18:41 +0530
In-Reply-To: <2020012221211439644710@gmail.com>

On 01/22/2020 06:51 PM, Li Xinhai wrote:
> On 2020-01-22 at 14:35 Anshuman Khandual wrote:
>>
>>
>> On 01/16/2020 09:41 AM, Li Xinhai wrote:
>>> Check hstate at the early phase, when isolating a page, instead of during
>>> the unmap and move phase, to avoid useless isolation.
>>>
>>> Signed-off-by: Li Xinhai
>>> Cc: Michal Hocko
>>> Cc: Mike Kravetz
>>> ---
>>
>> Change log from the previous versions?
>
> V2, V3 and V4 all use different ways to fix the circular reference
> between hugetlb.h and mempolicy.h. The existing relationship between these
> two files allows inline functions in hugetlb.h to refer to symbols defined
> in mempolicy.h, but there is no feasible way for inline functions in
> mempolicy.h to use functions from hugetlb.h.
> After evaluating different fixes for this situation, the current patch looks
> better, as it no longer defines vma_migratable() as inline.

Okay, but please always include the change log.

>
> Regarding the new wrapper, yes, it is not necessary. It is defined for
> checking at the VMA level, meant to provide a different granularity for
> calls from higher level code (I noticed that in unmap_and_move_huge_page()
> the check is done with hugepage_migration_supported(page_hstate(hpage)), and
> tried using the new wrapper, which is different from that usage). If it
> looks too redundant, it is OK to change it.

Yeah, it is not really required as the hstate can easily be derived from the
VMA. Please drop this new wrapper.
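For example, in the new out-of-line vma_migratable() the check could simply
become something like this (untested sketch, relying only on the existing
helpers is_vm_hugetlb_page(), hstate_vma() and hugepage_migration_supported()
that already appear in this patch):

	/* hugetlb VMAs are migratable only if their hstate supports it */
	if (is_vm_hugetlb_page(vma) &&
	    !hugepage_migration_supported(hstate_vma(vma)))
		return false;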
>
>>
>>>  include/linux/hugetlb.h   | 10 ++++++++++
>>>  include/linux/mempolicy.h | 29 +----------------------------
>>>  mm/mempolicy.c            | 28 ++++++++++++++++++++++++++++
>>>  3 files changed, 39 insertions(+), 28 deletions(-)
>>>
>>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>>> index 31d4920..c9d871d 100644
>>> --- a/include/linux/hugetlb.h
>>> +++ b/include/linux/hugetlb.h
>>> @@ -598,6 +598,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
>>>  	return arch_hugetlb_migration_supported(h);
>>>  }
>>>
>>> +static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
>>> +{
>>> +	return hugepage_migration_supported(hstate_vma(vma));
>>> +}
>>
>> Another wrapper around hugepage_migration_supported() is not necessary.
>
> Reason as above.
>>
>>> +
>>>  /*
>>>   * Movability check is different as compared to migration check.
>>>   * It determines whether or not a huge page should be placed on
>>> @@ -809,6 +814,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
>>>  	return false;
>>>  }
>>>
>>> +static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
>>> +{
>>> +	return false;
>>> +}
>>> +
>>>  static inline bool hugepage_movable_supported(struct hstate *h)
>>>  {
>>>  	return false;
>>> diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
>>> index 5228c62..8165278 100644
>>> --- a/include/linux/mempolicy.h
>>> +++ b/include/linux/mempolicy.h
>>> @@ -173,34 +173,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
>>>  extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
>>>
>>>  /* Check if a vma is migratable */
>>> -static inline bool vma_migratable(struct vm_area_struct *vma)
>>> -{
>>> -	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
>>> -		return false;
>>> -
>>> -	/*
>>> -	 * DAX device mappings require predictable access latency, so avoid
>>> -	 * incurring periodic faults.
>>> -	 */
>>> -	if (vma_is_dax(vma))
>>> -		return false;
>>> -
>>> -#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
>>> -	if (vma->vm_flags & VM_HUGETLB)
>>> -		return false;
>>> -#endif
>>> -
>>> -	/*
>>> -	 * Migration allocates pages in the highest zone. If we cannot
>>> -	 * do so then migration (at least from node to node) is not
>>> -	 * possible.
>>> -	 */
>>> -	if (vma->vm_file &&
>>> -		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
>>> -			< policy_zone)
>>> -		return false;
>>> -	return true;
>>> -}
>>
>> Why is vma_migratable() being moved?
> Reason as above.
>
>>
>>> +extern bool vma_migratable(struct vm_area_struct *vma);
>>>
>>>  extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long);
>>>  extern void mpol_put_task_policy(struct task_struct *);
>>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>>> index 067cf7d..8a01fb1 100644
>>> --- a/mm/mempolicy.c
>>> +++ b/mm/mempolicy.c
>>> @@ -1714,6 +1714,34 @@ static int kernel_get_mempolicy(int __user *policy,
>>>
>>>  #endif /* CONFIG_COMPAT */
>>>
>>> +bool vma_migratable(struct vm_area_struct *vma)
>>> +{
>>> +	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
>>> +		return false;
>>> +
>>> +	/*
>>> +	 * DAX device mappings require predictable access latency, so avoid
>>> +	 * incurring periodic faults.
>>> +	 */
>>> +	if (vma_is_dax(vma))
>>> +		return false;
>>> +
>>> +	if (is_vm_hugetlb_page(vma) &&
>>> +		!vm_hugepage_migration_supported(vma))
>>> +		return false;
>>
>> This (use hugepage_migration_supported() instead) can be added above
>> without the code movement.
> Reason as above.
>>
>>> +
>>> +	/*
>>> +	 * Migration allocates pages in the highest zone. If we cannot
>>> +	 * do so then migration (at least from node to node) is not
>>> +	 * possible.
>>> +	 */
>>> +	if (vma->vm_file &&
>>> +		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
>>> +			< policy_zone)
>>> +		return false;
>>> +	return true;
>>> +}
>>> +
>>>  struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
>>>  					unsigned long addr)
>>>  {
>>
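To make the point about avoiding the code movement concrete: the suggestion
is to keep vma_migratable() inline in mempolicy.h and only replace its
CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION block, roughly as below (an untested
sketch, and subject to the hugetlb.h/mempolicy.h include ordering issue
discussed above):

-#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
-	if (vma->vm_flags & VM_HUGETLB)
-		return false;
-#endif
+	if (is_vm_hugetlb_page(vma) &&
+	    !hugepage_migration_supported(hstate_vma(vma)))
+		return false;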