linux-kernel.vger.kernel.org archive mirror
From: Glauber Costa <glommer@parallels.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: <linux-mm@kvack.org>, <cgroups@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tejun Heo <tj@kernel.org>, Li Zefan <lizefan@huawei.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Balbir Singh <bsingharora@gmail.com>
Subject: Re: [PATCH v3 3/6] memcg: Simplify mem_cgroup_force_empty_list error handling
Date: Mon, 29 Oct 2012 19:09:28 +0400	[thread overview]
Message-ID: <508E9C28.1000308@parallels.com> (raw)
In-Reply-To: <20121029141534.GB20757@dhcp22.suse.cz>

>>> + * move charges to its parent or the root cgroup if the group has no
>>> + * parent (aka use_hierarchy==0).
>>> + * Although this might fail (get_page_unless_zero, isolate_lru_page or
>>> + * mem_cgroup_move_account fails) the failure is always temporary and
>>> + * it signals a race with a page removal/uncharge or migration. In the
>>> + * first case the page is on the way out and it will vanish from the LRU
>>> + * on the next attempt and the call should be retried later.
>>> + * Isolation from the LRU fails only if page has been isolated from
>>> + * the LRU since we looked at it and that usually means either global
>>> + * reclaim or migration going on. The page will either get back to the
>>> + * LRU or vanish.
>>
>> I just wonder how long this can go on in the worst case?
>  
> That's a good question and to be honest I have no idea. The point is
> that it will terminate eventually and that the group is on the way out
> so the time to complete the removal is not a big deal IMHO. We had
> basically similar situation previously when we would need to repeat
> rmdir loop on EBUSY. The only change is that we do not have to retry
> anymore.
> 
> So the key point is to check whether my assumption about the failures
> being only temporary is correct, and that we cannot block the rest of
> the kernel/userspace from proceeding while we are waiting for
> finalization. I believe this is true but... (famous last words?)
> 
At least as far as I can see, this assumption holds.


Thread overview: 29+ messages
2012-10-26 11:37 memcg/cgroup: do not fail fail on pre_destroy callbacks Michal Hocko
2012-10-26 11:37 ` [PATCH v3 1/6] memcg: split mem_cgroup_force_empty into reclaiming and reparenting parts Michal Hocko
2012-10-29 13:45   ` Glauber Costa
2012-10-31 16:29   ` Johannes Weiner
2012-10-26 11:37 ` [PATCH v3 2/6] memcg: root_cgroup cannot reach mem_cgroup_move_parent Michal Hocko
2012-10-29 13:48   ` Glauber Costa
2012-10-29 13:52     ` Michal Hocko
2012-10-31 16:31   ` Johannes Weiner
2012-10-26 11:37 ` [PATCH v3 3/6] memcg: Simplify mem_cgroup_force_empty_list error handling Michal Hocko
2012-10-29 13:58   ` Glauber Costa
2012-10-29 14:15     ` Michal Hocko
2012-10-29 15:09       ` Glauber Costa [this message]
2012-10-29 22:00     ` Andrew Morton
2012-10-30 10:35       ` Michal Hocko
2012-10-31 21:30         ` Andrew Morton
2012-11-13 21:10         ` Johannes Weiner
2012-11-14 13:59           ` Michal Hocko
2012-11-14 18:33             ` Johannes Weiner
2012-10-26 11:37 ` [PATCH v3 4/6] cgroups: forbid pre_destroy callback to fail Michal Hocko
2012-10-29 14:04   ` Glauber Costa
2012-10-29 14:06     ` Glauber Costa
2012-10-29 14:17       ` Michal Hocko
2012-11-13 21:13   ` Johannes Weiner
2012-10-26 11:37 ` [PATCH v3 5/6] memcg: make mem_cgroup_reparent_charges non failing Michal Hocko
2012-10-29 14:07   ` Glauber Costa
2012-10-26 11:37 ` [PATCH v3 6/6] hugetlb: do not fail in hugetlb_cgroup_pre_destroy Michal Hocko
2012-10-29 14:08   ` Glauber Costa
2012-10-29 23:26 ` memcg/cgroup: do not fail fail on pre_destroy callbacks Tejun Heo
2012-10-30 23:37   ` Michal Hocko
