From: Hajime Tazaki
To: linux-arch@vger.kernel.org
Cc: Hajime Tazaki, Arnd Bergmann, Jonathan Corbet, Christoph Lameter,
    Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    netdev@vger.kernel.org, linux-mm@kvack.org, Jeff Dike,
    Richard Weinberger, Rusty Russell, Mathieu Lacage
Subject: [RFC PATCH 02/11] slab: add private memory allocator header for arch/lib
Date: Tue, 24 Mar 2015 22:10:33 +0900
Message-Id: <1427202642-1716-3-git-send-email-tazaki@sfc.wide.ad.jp>
In-Reply-To: <1427202642-1716-1-git-send-email-tazaki@sfc.wide.ad.jp>
References: <1427202642-1716-1-git-send-email-tazaki@sfc.wide.ad.jp>

Add header inclusion for CONFIG_LIB to wrap kmalloc and friends. This
brings in the malloc(3)-based allocator used by arch/lib.
Signed-off-by: Hajime Tazaki
---
 include/linux/slab.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 9a139b6..6914e1f 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -205,6 +205,14 @@ size_t ksize(const void *);
 #endif
 #endif
 
+#ifdef CONFIG_LIB
+#define KMALLOC_SHIFT_MAX 30
+#define KMALLOC_SHIFT_HIGH PAGE_SHIFT
+#ifndef KMALLOC_SHIFT_LOW
+#define KMALLOC_SHIFT_LOW 3
+#endif
+#endif
+
 /* Maximum allocatable size */
 #define KMALLOC_MAX_SIZE (1UL << KMALLOC_SHIFT_MAX)
 /* Maximum size for which we actually use a slab cache */
@@ -350,6 +358,9 @@ kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
 }
 #endif
 
+#ifdef CONFIG_LIB
+#include
+#else
 static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
 {
 	unsigned int order = get_order(size);
@@ -428,6 +439,7 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
 	}
 	return __kmalloc(size, flags);
 }
+#endif
 
 /*
  * Determine size used for the nth kmalloc cache.
-- 
2.1.0
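[Editor's note: the private header this patch pulls in under CONFIG_LIB is not shown here. As a rough sketch of the idea it describes (kmalloc and friends routed to the host's malloc(3)), the following illustrates what such a shim could look like; all names and the __GFP_ZERO handling are illustrative assumptions, not arch/lib's actual implementation.]

```c
/* Hypothetical malloc(3)-backed kmalloc shim, in the spirit of the header
 * the patch includes under CONFIG_LIB. Illustrative only. */
#include <stdlib.h>
#include <string.h>

typedef unsigned int gfp_t;
#define __GFP_ZERO 0x8000u	/* assumed flag value for this sketch */

static inline void *kmalloc(size_t size, gfp_t flags)
{
	void *p = malloc(size);	/* host allocator replaces the slab */

	if (p && (flags & __GFP_ZERO))
		memset(p, 0, size);
	return p;
}

static inline void *kzalloc(size_t size, gfp_t flags)
{
	return kmalloc(size, flags | __GFP_ZERO);
}

static inline void kfree(const void *p)
{
	free((void *)p);	/* cast away const, as the kernel's kfree does */
}
```

Because the whole kmalloc family in slab.h is skipped via the #else branch above, a library build links these host-backed definitions instead of the slab allocator.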