From: Vladimir Davydov <vdavydov@virtuozzo.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Miller <davem@davemloft.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tejun Heo <tj@kernel.org>, Michal Hocko <mhocko@suse.cz>,
	<netdev@vger.kernel.org>, <linux-mm@kvack.org>,
	<cgroups@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<kernel-team@fb.com>
Subject: Re: [PATCH 13/14] mm: memcontrol: account socket memory in unified hierarchy memory controller
Date: Fri, 20 Nov 2015 16:10:33 +0300	[thread overview]
Message-ID: <20151120131033.GF31308@esperanza> (raw)
In-Reply-To: <1447371693-25143-14-git-send-email-hannes@cmpxchg.org>

On Thu, Nov 12, 2015 at 06:41:32PM -0500, Johannes Weiner wrote:
...
> @@ -5514,16 +5550,43 @@ void sock_release_memcg(struct sock *sk)
>   */
>  bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
>  {
> +	unsigned int batch = max(CHARGE_BATCH, nr_pages);
>  	struct page_counter *counter;
> +	bool force = false;
>  
> -	if (page_counter_try_charge(&memcg->tcp_mem.memory_allocated,
> -				    nr_pages, &counter)) {
> -		memcg->tcp_mem.memory_pressure = 0;
> +#ifdef CONFIG_MEMCG_KMEM
> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
> +		if (page_counter_try_charge(&memcg->tcp_mem.memory_allocated,
> +					    nr_pages, &counter)) {
> +			memcg->tcp_mem.memory_pressure = 0;
> +			return true;
> +		}
> +		page_counter_charge(&memcg->tcp_mem.memory_allocated, nr_pages);
> +		memcg->tcp_mem.memory_pressure = 1;
> +		return false;
> +	}
> +#endif
> +	if (consume_stock(memcg, nr_pages))
>  		return true;
> +retry:
> +	if (page_counter_try_charge(&memcg->memory, batch, &counter))
> +		goto done;
> +
> +	if (batch > nr_pages) {
> +		batch = nr_pages;
> +		goto retry;
>  	}
> -	page_counter_charge(&memcg->tcp_mem.memory_allocated, nr_pages);
> -	memcg->tcp_mem.memory_pressure = 1;
> -	return false;
> +
> +	page_counter_charge(&memcg->memory, batch);
> +	force = true;
> +done:

> +	css_get_many(&memcg->css, batch);

Is there any point in taking a css reference for each charged page? For kmem
it is absolutely necessary, because dangling slabs must block
destruction of memcg's kmem caches, which are destroyed on css_free. But
for sockets there's no such problem: memcg will be destroyed only after
all sockets are destroyed and therefore uncharged (since
sock_update_memcg pins css).

> +	if (batch > nr_pages)
> +		refill_stock(memcg, batch - nr_pages);
> +
> +	schedule_work(&memcg->socket_work);

I think it's suboptimal to schedule the work even if we are below the
high threshold.

BTW, why do we need this work at all? Why isn't reclaim_high called from
task_work enough?

Thanks,
Vladimir

> +
> +	return !force;
>  }
>  
>  /**
