From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 24 Aug 2016 23:10:03 -0500 (CDT)
From: Christoph Lameter
X-X-Sender: cl@east.gentwo.org
To: Andi Kleen
Cc: Michal Hocko, Joonsoo Kim, Aruna Ramakrishna, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Mike Kravetz, Pekka Enberg,
    David Rientjes, Andrew Morton, Mel Gorman, Jiri Slaby
Subject: Re: what is the purpose of SLAB and SLUB
In-Reply-To: <8760qr8orh.fsf@tassilo.jf.intel.com>
Message-ID:
References: <1471458050-29622-1-git-send-email-aruna.ramakrishna@oracle.com> <20160818115218.GJ30162@dhcp22.suse.cz> <20160823021303.GB17039@js1304-P5Q-DELUXE> <20160823153807.GN23577@dhcp22.suse.cz> <8760qr8orh.fsf@tassilo.jf.intel.com>
Content-Type: text/plain; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 23 Aug 2016, Andi Kleen wrote:

> Why would you stop someone from working on SLAB if they want to?
>
> Forcibly enforcing a freeze on something can make sense if you're
> in charge of a team to conserve resources, but in Linux the situation is
> very different.

I agree, and frankly having multiple allocators is a good thing. Features
that work well in one are copied to the other and enhanced in the process.
I think this has driven code development quite a bit.

Each allocator takes a different basic approach to storage layout and
synchronization, which determines its performance in various usage
scenarios. The competition is healthy: when one allocator shows better
numbers in some situation, a developer who favors the other is motivated to
find a way to improve its performance or make its storage use more
effective.

There may be more creative ways of laying out storage in the future, and I
would like the kernel to retain the flexibility to explore those, if
necessary with additional variants. The more common code we can isolate,
the easier it becomes to try out a new layout and a new form of
serialization and see whether it provides advantages.