From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751057AbdCQHWM (ORCPT ); Fri, 17 Mar 2017 03:22:12 -0400
Received: from hqemgate16.nvidia.com ([216.228.121.65]:16621 "EHLO
	hqemgate16.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750877AbdCQHWK (ORCPT );
	Fri, 17 Mar 2017 03:22:10 -0400
X-PGP-Universal: processed;
	by hqpgpgate101.nvidia.com on Fri, 17 Mar 2017 00:22:09 -0700
Subject: Re: [HMM 07/16] mm/migrate: new memory migration helper for use
 with device memory v4
To: Balbir Singh
References: <1489680335-6594-1-git-send-email-jglisse@redhat.com>
 <1489680335-6594-8-git-send-email-jglisse@redhat.com>
 <20170316160520.d03ac02474cad6d2c8eba9bc@linux-foundation.org>
 <94e0d115-7deb-c748-3dc2-60d6289e6551@nvidia.com>
CC: Andrew Morton , Jérôme Glisse , "linux-kernel@vger.kernel.org" ,
 linux-mm , Naoya Horiguchi , David Nellans , Evgeny Baskakov ,
 Mark Hairgrove , Sherry Cheung , Subhash Gutti
X-Nvconfidentiality: public
From: John Hubbard
Message-ID: <8c26baee-c681-a03d-4021-f9f92182e71f@nvidia.com>
Date: Fri, 17 Mar 2017 00:17:15 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101
 Thunderbird/45.8.0
MIME-Version: 1.0
In-Reply-To:
X-Originating-IP: [10.2.168.151]
X-ClientProxiedBy: HQMAIL106.nvidia.com (172.18.146.12) To
 HQMAIL107.nvidia.com (172.20.187.13)
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 03/16/2017 09:51 PM, Balbir Singh wrote:
[...]
> So this is what I ended up with, a quick fix for the 32 bit
> build failures
>
> Date: Fri, 17 Mar 2017 15:42:52 +1100
> Subject: [PATCH] mm/hmm: Fix build on 32 bit systems
>
> Fix build breakage of hmm-v18 in the current mmotm by
> making the migrate_vma() and related functions 64
> bit only. The 32 bit variant will return -EINVAL.
> There are other approaches to solving this problem,
> but we can enable 32 bit systems as we need them.
>
> This patch tries to limit the impact on 32 bit systems
> by turning HMM off on them and not enabling the migrate
> functions.
>
> I've built this on ppc64/i386 and x86_64
>
> Signed-off-by: Balbir Singh
> ---
>  include/linux/migrate.h | 18 +++++++++++++++++-
>  mm/Kconfig              |  4 +++-
>  mm/migrate.c            |  3 ++-
>  3 files changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 01f4945..1888a70 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -124,7 +124,7 @@ static inline int migrate_misplaced_transhuge_page(struct mm_struct *mm,
>  }
>  #endif /* CONFIG_NUMA_BALANCING && CONFIG_TRANSPARENT_HUGEPAGE*/
>
> -
> +#ifdef CONFIG_64BIT
>  #define MIGRATE_PFN_VALID	(1UL << (BITS_PER_LONG_LONG - 1))
>  #define MIGRATE_PFN_MIGRATE	(1UL << (BITS_PER_LONG_LONG - 2))
>  #define MIGRATE_PFN_HUGE	(1UL << (BITS_PER_LONG_LONG - 3))

As long as we're getting this accurate, should we make that 1ULL, in all of
the MIGRATE_PFN_* defines? The 1ULL is what determines the type of the
resulting number, so it's one more tiny piece of type correctness that is
good to have.

The rest of this fix looks good, and the above is not technically necessary
(the code that uses it will force its own type anyway), so:

Reviewed-by: John Hubbard

thanks
John Hubbard
NVIDIA

> @@ -145,6 +145,7 @@ static inline unsigned long migrate_pfn_size(unsigned long mpfn)
>  {
>  	return mpfn & MIGRATE_PFN_HUGE ? PMD_SIZE : PAGE_SIZE;
>  }
> +#endif
>
>  /*
>   * struct migrate_vma_ops - migrate operation callback
> @@ -194,6 +195,7 @@ struct migrate_vma_ops {
>  			void *private);
>  };
>
> +#ifdef CONFIG_64BIT
>  int migrate_vma(const struct migrate_vma_ops *ops,
>  		struct vm_area_struct *vma,
>  		unsigned long mentries,
> @@ -202,5 +204,19 @@ int migrate_vma(const struct migrate_vma_ops *ops,
>  		unsigned long *src,
>  		unsigned long *dst,
>  		void *private);
> +#else
> +static inline int migrate_vma(const struct migrate_vma_ops *ops,
> +			struct vm_area_struct *vma,
> +			unsigned long mentries,
> +			unsigned long start,
> +			unsigned long end,
> +			unsigned long *src,
> +			unsigned long *dst,
> +			void *private)
> +{
> +	return -EINVAL;
> +}
> +#endif
> +
>
>  #endif /* _LINUX_MIGRATE_H */
> diff --git a/mm/Kconfig b/mm/Kconfig
> index a430d51..c13677f 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -291,7 +291,7 @@ config ARCH_ENABLE_HUGEPAGE_MIGRATION
>
>  config HMM
>  	bool
> -	depends on MMU
> +	depends on MMU && 64BIT
>
>  config HMM_MIRROR
>  	bool "HMM mirror CPU page table into a device page table"
> @@ -307,6 +307,7 @@ config HMM_MIRROR
>  	  Second side of the equation is replicating CPU page table content for
>  	  range of virtual address. This require careful synchronization with
>  	  CPU page table update.
> +	depends on 64BIT
>
>  config HMM_DEVMEM
>  	bool "HMM device memory helpers (to leverage ZONE_DEVICE)"
> @@ -314,6 +315,7 @@ config HMM_DEVMEM
>  	help
>  	  HMM devmem are helpers to leverage new ZONE_DEVICE feature. This is
>  	  just to avoid device driver to replicate boiler plate code.
> +	depends on 64BIT
>
>  config PHYS_ADDR_T_64BIT
>  	def_bool 64BIT || ARCH_PHYS_ADDR_T_64BIT
> diff --git a/mm/migrate.c b/mm/migrate.c
> index b9d25d1..15f2972 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2080,7 +2080,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
>
>  #endif /* CONFIG_NUMA */
>
> -
> +#ifdef CONFIG_64BIT
>  struct migrate_vma {
>  	struct vm_area_struct	*vma;
>  	unsigned long		*dst;
> @@ -2787,3 +2787,4 @@ int migrate_vma(const struct migrate_vma_ops *ops,
>  	return 0;
>  }
>  EXPORT_SYMBOL(migrate_vma);
> +#endif
>
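
A minimal sketch of the 1UL vs. 1ULL point above (not part of Balbir's patch;
the _UL/_ULL name suffixes are made up purely for illustration): on a 32-bit
kernel, unsigned long is only 32 bits wide while BITS_PER_LONG_LONG is 64, so
shifting 1UL by 61..63 bits would overflow the type of the constant, whereas
1ULL keeps it 64 bits wide:

	/* 1UL is a 32-bit constant on 32-bit kernels; shifting it by 63 is undefined: */
	#define MIGRATE_PFN_VALID_UL	(1UL  << (BITS_PER_LONG_LONG - 1))

	/* 1ULL is always 64 bits wide, so the flag lands in the intended bit: */
	#define MIGRATE_PFN_VALID_ULL	(1ULL << (BITS_PER_LONG_LONG - 1))

With the CONFIG_64BIT guard in place the 1UL form never gets compiled on a
32-bit kernel anyway, which is why the change is optional rather than
required.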