From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933108AbaFLLiV (ORCPT ); Thu, 12 Jun 2014 07:38:21 -0400
Received: from mail-wg0-f50.google.com ([74.125.82.50]:60166 "EHLO
	mail-wg0-f50.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755268AbaFLLiR convert rfc822-to-8bit (ORCPT );
	Thu, 12 Jun 2014 07:38:17 -0400
From: Michal Nazarewicz
To: Joonsoo Kim, Andrew Morton, "Aneesh Kumar K.V", Marek Szyprowski
Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	Joonsoo Kim
Subject: Re: [PATCH v2 09/10] mm, cma: move output param to the end of param list
In-Reply-To: <1402543307-29800-10-git-send-email-iamjoonsoo.kim@lge.com>
Organization: http://mina86.com/
References: <1402543307-29800-1-git-send-email-iamjoonsoo.kim@lge.com>
	<1402543307-29800-10-git-send-email-iamjoonsoo.kim@lge.com>
User-Agent: Notmuch/0.17+15~gb65ca8e (http://notmuchmail.org)
	Emacs/24.4.50.1 (x86_64-unknown-linux-gnu)
Date: Thu, 12 Jun 2014 13:38:11 +0200
Message-ID: 
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8BIT
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jun 12 2014, Joonsoo Kim wrote:
> Conventionally, we put output param to the end of param list.
> cma_declare_contiguous() doesn't look like that, so change it.

Perhaps the function should be changed to return an error-pointer?

> Additionally, move down cma_areas reference code to the position
> where it is really needed.
>
> Signed-off-by: Joonsoo Kim

Acked-by: Michal Nazarewicz

>
> diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
> index 28ec226..97613ea 100644
> --- a/arch/powerpc/kvm/book3s_hv_builtin.c
> +++ b/arch/powerpc/kvm/book3s_hv_builtin.c
> @@ -184,7 +184,7 @@ void __init kvm_cma_reserve(void)
>
> 		align_size = max(kvm_rma_pages << PAGE_SHIFT, align_size);
> 		cma_declare_contiguous(selected_size, 0, 0, align_size,
> -			KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, &kvm_cma, false);
> +			KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, false, &kvm_cma);
> 	}
> }
>
> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index f177f73..bfd4553 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/dma-contiguous.c
> @@ -149,7 +149,7 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
> {
> 	int ret;
>
> -	ret = cma_declare_contiguous(size, base, limit, 0, 0, res_cma, fixed);
> +	ret = cma_declare_contiguous(size, base, limit, 0, 0, fixed, res_cma);
> 	if (ret)
> 		return ret;
>
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index e38efe9..e53eead 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -6,7 +6,7 @@ struct cma;
> extern int __init cma_declare_contiguous(phys_addr_t size,
> 			phys_addr_t base, phys_addr_t limit,
> 			phys_addr_t alignment, int order_per_bit,
> -			struct cma **res_cma, bool fixed);
> +			bool fixed, struct cma **res_cma);
> extern struct page *cma_alloc(struct cma *cma, int count, unsigned int align);
> extern bool cma_release(struct cma *cma, struct page *pages, int count);
> #endif
> diff --git a/mm/cma.c b/mm/cma.c
> index 01a0713..22a5b23 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -142,8 +142,8 @@ core_initcall(cma_init_reserved_areas);
>  * @limit: End address of the reserved memory (optional, 0 for any).
>  * @alignment: Alignment for the contiguous memory area, should be power of 2
>  * @order_per_bit: Order of pages represented by one bit on bitmap.
> - * @res_cma: Pointer to store the created cma region.
>  * @fixed: hint about where to place the reserved area
> + * @res_cma: Pointer to store the created cma region.
>  *
>  * This function reserves memory from early allocator. It should be
>  * called by arch specific code once the early allocator (memblock or bootmem)
> @@ -156,9 +156,9 @@ core_initcall(cma_init_reserved_areas);
> int __init cma_declare_contiguous(phys_addr_t size,
> 			phys_addr_t base, phys_addr_t limit,
> 			phys_addr_t alignment, int order_per_bit,
> -			struct cma **res_cma, bool fixed)
> +			bool fixed, struct cma **res_cma)
> {
> -	struct cma *cma = &cma_areas[cma_area_count];
> +	struct cma *cma;
> 	int ret = 0;
>
> 	pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
> @@ -214,6 +214,7 @@ int __init cma_declare_contiguous(phys_addr_t size,
> 	 * Each reserved area must be initialised later, when more kernel
> 	 * subsystems (like slab allocator) are available.
> 	 */
> +	cma = &cma_areas[cma_area_count];
> 	cma->base_pfn = PFN_DOWN(base);
> 	cma->count = size >> PAGE_SHIFT;
> 	cma->order_per_bit = order_per_bit;
> --
> 1.7.9.5
>

--
Best regards,                                         _     _
.o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +------ooO--(_)--Ooo--
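
For reference, a rough sketch of the error-pointer alternative floated in the
reply above (hypothetical code, not the posted patch): cma_declare_contiguous()
would return a struct cma pointer, using ERR_PTR() for failures, so the output
parameter disappears entirely. It assumes the cma_areas[]/cma_area_count
definitions and the existing checking/reservation logic already in mm/cma.c.

	/*
	 * Hypothetical sketch only: error-pointer returning variant of
	 * cma_declare_contiguous().  Argument checks and the memblock
	 * reservation are elided; they would stay as in mm/cma.c and
	 * set ret on failure.
	 */
	struct cma * __init cma_declare_contiguous(phys_addr_t size,
				phys_addr_t base, phys_addr_t limit,
				phys_addr_t alignment, int order_per_bit,
				bool fixed)
	{
		struct cma *cma;
		int ret = 0;

		if (cma_area_count == ARRAY_SIZE(cma_areas))
			return ERR_PTR(-ENOSPC);

		/* ... existing argument checks and memblock reservation ... */
		if (ret)
			return ERR_PTR(ret);

		/* The array slot is only claimed once the reservation succeeded. */
		cma = &cma_areas[cma_area_count];
		cma->base_pfn = PFN_DOWN(base);
		cma->count = size >> PAGE_SHIFT;
		cma->order_per_bit = order_per_bit;
		cma_area_count++;

		return cma;
	}

	/*
	 * A caller such as kvm_cma_reserve() would then look roughly like:
	 *
	 *	kvm_cma = cma_declare_contiguous(selected_size, 0, 0, align_size,
	 *					 KVM_CMA_CHUNK_ORDER - PAGE_SHIFT,
	 *					 false);
	 *	if (IS_ERR(kvm_cma))
	 *		pr_err("CMA reservation failed: %ld\n", PTR_ERR(kvm_cma));
	 */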