From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [HMM 07/16] mm/migrate: new memory migration helper for use with device memory v4
From: John Hubbard
To: Andrew Morton, Jérôme Glisse
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Naoya Horiguchi, David Nellans, Evgeny Baskakov, Mark Hairgrove, Sherry Cheung, Subhash Gutti
Date: Thu, 16 Mar 2017 17:22:51 -0700
In-Reply-To: <20170316160520.d03ac02474cad6d2c8eba9bc@linux-foundation.org>
References: <1489680335-6594-1-git-send-email-jglisse@redhat.com> <1489680335-6594-8-git-send-email-jglisse@redhat.com> <20170316160520.d03ac02474cad6d2c8eba9bc@linux-foundation.org>
List-ID: linux-kernel@vger.kernel.org

On 03/16/2017 04:05 PM, Andrew Morton wrote:
> On Thu, 16 Mar 2017 12:05:26 -0400 Jérôme Glisse wrote:
>
>> +static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
>> +{
>> +	if (!(mpfn & MIGRATE_PFN_VALID))
>> +		return NULL;
>> +	return pfn_to_page(mpfn & MIGRATE_PFN_MASK);
>> +}
>
> i386 allnoconfig:
>
> In file included from mm/page_alloc.c:61:
> ./include/linux/migrate.h: In function 'migrate_pfn_to_page':
> ./include/linux/migrate.h:139: warning: left shift count >= width of type
> ./include/linux/migrate.h:141: warning: left shift count >= width of type
> ./include/linux/migrate.h: In function 'migrate_pfn_size':
> ./include/linux/migrate.h:146: warning: left shift count >= width of type

It seems clear that this was never meant to work with < 64-bit pfns:

// migrate.h excerpt:
#define MIGRATE_PFN_VALID	(1UL << (BITS_PER_LONG_LONG - 1))
#define MIGRATE_PFN_MIGRATE	(1UL << (BITS_PER_LONG_LONG - 2))
#define MIGRATE_PFN_HUGE	(1UL << (BITS_PER_LONG_LONG - 3))
#define MIGRATE_PFN_LOCKED	(1UL << (BITS_PER_LONG_LONG - 4))
#define MIGRATE_PFN_WRITE	(1UL << (BITS_PER_LONG_LONG - 5))
#define MIGRATE_PFN_DEVICE	(1UL << (BITS_PER_LONG_LONG - 6))
#define MIGRATE_PFN_ERROR	(1UL << (BITS_PER_LONG_LONG - 7))
#define MIGRATE_PFN_MASK	((1UL << (BITS_PER_LONG_LONG - PAGE_SHIFT)) - 1)

...obviously, there is not enough room for these flags in a 32-bit pfn. So, given the current HMM design, I think we are going to have to provide a 32-bit version of these routines (migrate_pfn_to_page, and related) that is a no-op, right?

thanks
John Hubbard
NVIDIA