From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753685AbaFPFaA (ORCPT );
	Mon, 16 Jun 2014 01:30:00 -0400
Received: from lgeamrelo02.lge.com ([156.147.1.126]:57908 "EHLO
	lgeamrelo02.lge.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753542AbaFPF35 (ORCPT );
	Mon, 16 Jun 2014 01:29:57 -0400
X-Original-SENDERIP: 10.177.220.145
X-Original-MAILFROM: iamjoonsoo.kim@lge.com
Date: Mon, 16 Jun 2014 14:34:08 +0900
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: "Aneesh Kumar K.V"
Cc: Andrew Morton, Marek Szyprowski, Michal Nazarewicz, Minchan Kim,
	Russell King - ARM Linux, Greg Kroah-Hartman, Paolo Bonzini,
	Gleb Natapov, Alexander Graf, Benjamin Herrenschmidt,
	Paul Mackerras, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH v2 07/10] PPC, KVM, CMA: use general CMA reserved area management framework
Message-ID: <20140616053408.GH23210@js1304-P5Q-DELUXE>
References: <1402543307-29800-1-git-send-email-iamjoonsoo.kim@lge.com>
 <1402543307-29800-8-git-send-email-iamjoonsoo.kim@lge.com>
 <87mwdfg9co.fsf@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87mwdfg9co.fsf@linux.vnet.ibm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Jun 14, 2014 at 02:23:59PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:
>
> > Now we have a general CMA reserved area management framework,
> > so use it for future maintainability. There is no functional change.
> >
> > Acked-by: Michal Nazarewicz
> > Acked-by: Paolo Bonzini
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> Need this. We may want to keep the VM_BUG_ON by moving
> KVM_CMA_CHUNK_ORDER around.
>
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index 8056107..1932e0e 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -37,8 +37,6 @@
>  #include <...>
>  #include <...>
>
> -#include "book3s_hv_cma.h"
> -
>  /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
>  #define MAX_LPID_970	63
>
> @@ -64,7 +62,6 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
>  	}
>
>  	kvm->arch.hpt_cma_alloc = 0;
> -	VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
>  	page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
>  	if (page) {
>  		hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
>
> -aneesh

Okay. So do you also want this?

@@ -131,16 +135,18 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
 {
 	unsigned long align_pages = HPT_ALIGN_PAGES;

+	VM_BUG_ON(get_order(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
+
 	/* Old CPUs require HPT aligned on a multiple of its size */
 	if (!cpu_has_feature(CPU_FTR_ARCH_206))
 		align_pages = nr_pages;

-	return kvm_alloc_cma(nr_pages, align_pages);
+	return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
 }

Thanks.
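For context, the "general CMA reserved area management framework" discussed
above is the generic CMA API: cma_alloc() and cma_release(), backed by a
reserved area declared at boot. Below is a minimal sketch of the allocator
pair after the conversion, assuming kvm_cma is the struct cma handle that the
reservation code fills in. The kvm_release_hpt() body and the use of
order_base_2() are this sketch's assumptions, not code from the thread:
order_base_2() converts a page count to a page order, whereas get_order(),
as used in the hunk above, expects a size in bytes.

#include <linux/cma.h>		/* cma_alloc(), cma_release() */
#include <linux/log2.h>		/* order_base_2() */
#include <linux/mm.h>		/* VM_BUG_ON() via mmdebug */
#include <asm/cputable.h>	/* cpu_has_feature() */

static struct cma *kvm_cma;	/* set up at boot by the reservation code */

struct page *kvm_alloc_hpt(unsigned long nr_pages)
{
	unsigned long align_pages = HPT_ALIGN_PAGES;

	/* Sanity check moved here from kvmppc_alloc_hpt(), per the thread:
	 * the request must cover at least one CMA bitmap chunk. */
	VM_BUG_ON(order_base_2(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);

	/* Old CPUs require the HPT to be aligned on a multiple of its size. */
	if (!cpu_has_feature(CPU_FTR_ARCH_206))
		align_pages = nr_pages;

	/* cma_alloc() takes the alignment as a page order, not a page count. */
	return cma_alloc(kvm_cma, nr_pages, order_base_2(align_pages));
}

void kvm_release_hpt(struct page *page, unsigned long nr_pages)
{
	cma_release(kvm_cma, page, nr_pages);
}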
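The other half of the conversion is the boot-time reservation that creates
kvm_cma in the first place. A sketch, assuming the cma_declare_contiguous()
signature this series introduces; the flat 5% sizing is a stand-in for the
existing PPC heuristic (kvm_cma_resv_ratio), not the patch's exact code:

#include <linux/cma.h>		/* cma_declare_contiguous() */
#include <linux/memblock.h>	/* memblock_phys_mem_size() */

void __init kvm_cma_reserve(void)
{
	phys_addr_t align = HPT_ALIGN_PAGES << PAGE_SHIFT;
	/* Stand-in sizing; the real code scales by kvm_cma_resv_ratio. */
	phys_addr_t selected_size = memblock_phys_mem_size() * 5 / 100;

	/* Carve the area out of memblock and get back a handle (kvm_cma)
	 * for later cma_alloc()/cma_release() calls. order_per_bit is the
	 * granularity the CMA bitmap tracks, hence KVM_CMA_CHUNK_ORDER. */
	if (cma_declare_contiguous(0, selected_size, 0, align,
				   KVM_CMA_CHUNK_ORDER - PAGE_SHIFT,
				   false, &kvm_cma))
		pr_err("KVM: CMA reservation failed\n");
}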