From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
To: Joonsoo Kim
Cc: Andrew Morton, Marek Szyprowski, Michal Nazarewicz,
	Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH v2 07/10] PPC, KVM, CMA: use general CMA reserved area management framework
In-Reply-To: <20140616053408.GH23210@js1304-P5Q-DELUXE>
References: <1402543307-29800-1-git-send-email-iamjoonsoo.kim@lge.com>
	<1402543307-29800-8-git-send-email-iamjoonsoo.kim@lge.com>
	<87mwdfg9co.fsf@linux.vnet.ibm.com>
	<20140616053408.GH23210@js1304-P5Q-DELUXE>
Date: Mon, 16 Jun 2014 12:32:24 +0530
Message-ID: <87wqchcp6n.fsf@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain

Joonsoo Kim writes:

> On Sat, Jun 14, 2014 at 02:23:59PM +0530, Aneesh Kumar K.V wrote:
>> Joonsoo Kim writes:
>>
>> > Now, we have a general CMA reserved area management framework,
>> > so use it for future maintainability. There is no functional change.
>> >
>> > Acked-by: Michal Nazarewicz
>> > Acked-by: Paolo Bonzini
>> > Signed-off-by: Joonsoo Kim
>>
>> Need this. We may want to keep the VM_BUG_ON by moving
>> KVM_CMA_CHUNK_ORDER around.
>>
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> index 8056107..1932e0e 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> @@ -37,8 +37,6 @@
>>  #include
>>  #include
>>
>> -#include "book3s_hv_cma.h"
>> -
>>  /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
>>  #define MAX_LPID_970	63
>>
>> @@ -64,7 +62,6 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
>>  	}
>>
>>  	kvm->arch.hpt_cma_alloc = 0;
>> -	VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
>>  	page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
>>  	if (page) {
>>  		hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
>>
>> -aneesh
>
> Okay.
> So do you also want this?
>
> @@ -131,16 +135,18 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
>  {
>  	unsigned long align_pages = HPT_ALIGN_PAGES;
>
> +	VM_BUG_ON(get_order(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
> +
>  	/* Old CPUs require HPT aligned on a multiple of its size */
>  	if (!cpu_has_feature(CPU_FTR_ARCH_206))
>  		align_pages = nr_pages;
> -	return kvm_alloc_cma(nr_pages, align_pages);
> +	return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
>  }

That would also work.
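Putting your hunk together with the VM_BUG_ON move above, the resulting
kvm_alloc_hpt() would look roughly like the sketch below. This is only a
sketch: kvm_cma, HPT_ALIGN_PAGES and KVM_CMA_CHUNK_ORDER are assumed to
be defined as elsewhere in this series, and cma_alloc() is the allocator
from the new common framework.

/*
 * Sketch of kvm_alloc_hpt() with both changes applied (assumes kvm_cma,
 * HPT_ALIGN_PAGES and KVM_CMA_CHUNK_ORDER from the rest of the series).
 */
struct page *kvm_alloc_hpt(unsigned long nr_pages)
{
	unsigned long align_pages = HPT_ALIGN_PAGES;

	/* The request must span at least one CMA chunk */
	VM_BUG_ON(get_order(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);

	/* Old CPUs require HPT aligned on a multiple of its size */
	if (!cpu_has_feature(CPU_FTR_ARCH_206))
		align_pages = nr_pages;

	/* cma_alloc() takes the alignment as a page order, not a page count */
	return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
}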
Thanks
-aneesh