From: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Glauber Costa <glommer@parallels.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Tejun Heo <tj@kernel.org>, Michal Hocko <mhocko@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>
Subject: Re: [PATCH 5/7] memcg: get rid of once-per-second cache shrinking for dead memcgs
Date: Thu, 15 Nov 2012 18:41:28 +0900	[thread overview]
Message-ID: <50A4B8C8.6020202@jp.fujitsu.com> (raw)
In-Reply-To: <1352948093-2315-6-git-send-email-glommer@parallels.com>

(2012/11/15 11:54), Glauber Costa wrote:
> The idea is to do this synchronously, leaving the work to the shrinking
> facilities in vmscan.c and/or others. Not actively retrying shrinking
> may leave the caches alive longer, but it removes the ugly wakeups.
> One could argue that if the caches have free objects but are not being
> shrunk, it is because we don't need that memory yet.
> 
> Signed-off-by: Glauber Costa <glommer@parallels.com>
> CC: Michal Hocko <mhocko@suse.cz>
> CC: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> CC: Johannes Weiner <hannes@cmpxchg.org>
> CC: Andrew Morton <akpm@linux-foundation.org>

I agree with this patch, but can we have a way to see the amount of
unaccounted zombie cache usage, for debugging?

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

> ---
>   include/linux/slab.h |  2 +-
>   mm/memcontrol.c      | 17 +++++++----------
>   2 files changed, 8 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 18f8c98..456c327 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -214,7 +214,7 @@ struct memcg_cache_params {
>   			struct kmem_cache *root_cache;
>   			bool dead;
>   			atomic_t nr_pages;
> -			struct delayed_work destroy;
> +			struct work_struct destroy;
>   		};
>   	};
>   };
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index f9c5981..e3d805f 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3077,9 +3077,8 @@ static void kmem_cache_destroy_work_func(struct work_struct *w)
>   {
>   	struct kmem_cache *cachep;
>   	struct memcg_cache_params *p;
> -	struct delayed_work *dw = to_delayed_work(w);
>   
> -	p = container_of(dw, struct memcg_cache_params, destroy);
> +	p = container_of(w, struct memcg_cache_params, destroy);
>   
>   	cachep = memcg_params_to_cache(p);
>   
> @@ -3103,8 +3102,6 @@ static void kmem_cache_destroy_work_func(struct work_struct *w)
>   		kmem_cache_shrink(cachep);
>   		if (atomic_read(&cachep->memcg_params->nr_pages) == 0)
>   			return;
> -		/* Once per minute should be good enough. */
> -		schedule_delayed_work(&cachep->memcg_params->destroy, 60 * HZ);
>   	} else
>   		kmem_cache_destroy(cachep);
>   }
> @@ -3127,18 +3124,18 @@ void mem_cgroup_destroy_cache(struct kmem_cache *cachep)
>   	 * kmem_cache_shrink is enough to shake all the remaining objects and
>   	 * get the page count to 0. In this case, we'll deadlock if we try to
>   	 * cancel the work (the worker runs with an internal lock held, which
> -	 * is the same lock we would hold for cancel_delayed_work_sync().)
> +	 * is the same lock we would hold for cancel_work_sync().)
>   	 *
>   	 * Since we can't possibly know who got us here, just refrain from
>   	 * running if there is already work pending
>   	 */
> -	if (delayed_work_pending(&cachep->memcg_params->destroy))
> +	if (work_pending(&cachep->memcg_params->destroy))
>   		return;
>   	/*
>   	 * We have to defer the actual destroying to a workqueue, because
>   	 * we might currently be in a context that cannot sleep.
>   	 */
> -	schedule_delayed_work(&cachep->memcg_params->destroy, 0);
> +	schedule_work(&cachep->memcg_params->destroy);
>   }
>   
>   static char *memcg_cache_name(struct mem_cgroup *memcg, struct kmem_cache *s)
> @@ -3261,7 +3258,7 @@ void kmem_cache_destroy_memcg_children(struct kmem_cache *s)
>   		 * set, so flip it down to guarantee we are in control.
>   		 */
>   		c->memcg_params->dead = false;
> -		cancel_delayed_work_sync(&c->memcg_params->destroy);
> +		cancel_work_sync(&c->memcg_params->destroy);
>   		kmem_cache_destroy(c);
>   	}
>   	mutex_unlock(&set_limit_mutex);
> @@ -3285,9 +3282,9 @@ static void mem_cgroup_destroy_all_caches(struct mem_cgroup *memcg)
>   	list_for_each_entry(params, &memcg->memcg_slab_caches, list) {
>   		cachep = memcg_params_to_cache(params);
>   		cachep->memcg_params->dead = true;
> -		INIT_DELAYED_WORK(&cachep->memcg_params->destroy,
> +		INIT_WORK(&cachep->memcg_params->destroy,
>   				  kmem_cache_destroy_work_func);
> -		schedule_delayed_work(&cachep->memcg_params->destroy, 0);
> +		schedule_work(&cachep->memcg_params->destroy);
>   	}
>   	mutex_unlock(&memcg->slab_caches_mutex);
>   }
> 



  reply	other threads:[~2012-11-15  9:41 UTC|newest]

Thread overview: 38+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2012-11-15  2:54 [PATCH 0/7] fixups for kmemcg Glauber Costa
2012-11-15  0:47 ` David Rientjes
2012-11-15  2:54 ` [PATCH 1/7] memcg: simplify ida initialization Glauber Costa
2012-11-15  2:54 ` [PATCH 2/7] move include of workqueue.h to top of slab.h file Glauber Costa
2012-11-15  9:30   ` Kamezawa Hiroyuki
2012-11-15  2:54 ` [PATCH 3/7] memcg: remove test for current->mm in memcg_stop/resume_kmem_account Glauber Costa
2012-11-15  9:28   ` Kamezawa Hiroyuki
2012-11-15  2:54 ` [PATCH 4/7] memcg: replace __always_inline with plain inline Glauber Costa
2012-11-15  9:29   ` Kamezawa Hiroyuki
2012-11-15  2:54 ` [PATCH 5/7] memcg: get rid of once-per-second cache shrinking for dead memcgs Glauber Costa
2012-11-15  9:41   ` Kamezawa Hiroyuki [this message]
2012-11-15 13:47     ` Glauber Costa
2012-11-16  5:07       ` Kamezawa Hiroyuki
2012-11-16  7:11         ` Glauber Costa
2012-11-16  7:21           ` Kamezawa Hiroyuki
2012-11-16 14:55             ` Michal Hocko
2012-11-16 15:50               ` Glauber Costa
2012-11-15  2:54 ` [PATCH 6/7] memcg: add comments clarifying aspects of cache attribute propagation Glauber Costa
2012-11-15  2:54 ` [PATCH 7/7] slub: drop mutex before deleting sysfs entry Glauber Costa