From: Glauber Costa <glommer@parallels.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	Mel Gorman, Tejun Heo, Andrew Morton, Michal Hocko,
	Johannes Weiner, kamezawa.hiroyu@jp.fujitsu.com,
	Christoph Lameter, David Rientjes, Pekka Enberg,
	devel@openvz.org, Glauber Costa, Pekka Enberg, Suleiman Souhlal
Subject: [PATCH v5 04/18] slab: don't preemptively remove element from list in cache destroy
Date: Fri, 19 Oct 2012 18:20:28 +0400
Message-Id: <1350656442-1523-5-git-send-email-glommer@parallels.com>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1350656442-1523-1-git-send-email-glommer@parallels.com>
References: <1350656442-1523-1-git-send-email-glommer@parallels.com>

After the slab/slub/slob merge, we delete the element from the
slab_caches list and then, if the destruction fails, add it back
again. This behavior was present in some caches but not in others,
if my memory doesn't fail me.

I, however, see no reason why we need to do so, since we now hold
the lock during the whole deletion (which wasn't necessarily true
before). I propose a simplification in which we delete the element
only when there is no going back, so we never need to re-add it.

Signed-off-by: Glauber Costa
CC: Christoph Lameter
CC: Pekka Enberg
CC: Michal Hocko
CC: Kamezawa Hiroyuki
CC: Johannes Weiner
CC: Suleiman Souhlal
CC: Tejun Heo
---
 mm/slab_common.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1ee1d6f..bf4b4f1 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -174,16 +174,15 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	mutex_lock(&slab_mutex);
 	s->refcount--;
 	if (!s->refcount) {
-		list_del(&s->list);
-
 		if (!__kmem_cache_shutdown(s)) {
 			if (s->flags & SLAB_DESTROY_BY_RCU)
 				rcu_barrier();
 
+			list_del(&s->list);
+
 			kfree(s->name);
 			kmem_cache_free(kmem_cache, s);
 		} else {
-			list_add(&s->list, &slab_caches);
 			printk(KERN_ERR "kmem_cache_destroy %s: Slab cache still has objects\n",
 				s->name);
 			dump_stack();
-- 
1.7.11.7
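For reference, this is a sketch of how the tail of kmem_cache_destroy()
reads with this patch applied. It is reconstructed from the hunk above;
the closing braces and the final mutex_unlock() fall outside the hunk
and are assumptions, based on the commit message's claim that the lock
is now held across the whole deletion.

void kmem_cache_destroy(struct kmem_cache *s)
{
	mutex_lock(&slab_mutex);
	s->refcount--;
	if (!s->refcount) {
		if (!__kmem_cache_shutdown(s)) {
			/*
			 * Shutdown succeeded: there is no going back, so
			 * unlinking only here means the error path below
			 * never has to re-add the cache to slab_caches.
			 */
			if (s->flags & SLAB_DESTROY_BY_RCU)
				rcu_barrier();

			list_del(&s->list);

			kfree(s->name);
			kmem_cache_free(kmem_cache, s);
		} else {
			/* Still on slab_caches; no list_add() needed. */
			printk(KERN_ERR "kmem_cache_destroy %s: Slab cache still has objects\n",
				s->name);
			dump_stack();
		}
	}
	mutex_unlock(&slab_mutex);	/* assumed: not visible in the hunk */
}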