From: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Sha Zhengju <handai.szj@gmail.com>
Cc: Greg Thelen <gthelen@google.com>,
	linux-mm@kvack.org, cgroups@vger.kernel.org, yinghan@google.com,
	akpm@linux-foundation.org, mhocko@suse.cz,
	linux-kernel@vger.kernel.org, Sha Zhengju <handai.szj@taobao.com>
Subject: Re: [PATCH 5/7] memcg: add per cgroup dirty pages accounting
Date: Thu, 19 Jul 2012 15:33:05 +0900	[thread overview]
Message-ID: <5007AA21.3050707@jp.fujitsu.com> (raw)
In-Reply-To: <4FFD4822.4020300@gmail.com>

(2012/07/11 18:32), Sha Zhengju wrote:
> On 07/10/2012 05:02 AM, Greg Thelen wrote:
>> On Thu, Jun 28 2012, Sha Zhengju wrote:
>>
>>> From: Sha Zhengju <handai.szj@taobao.com>
>>>
>>> This patch adds memcg routines to count dirty pages, which allows the memory
>>> controller to maintain an accurate view of the amount of its dirty memory and
>>> to provide that information to users while the group's direct reclaim is working.
>>>
>>> After Kame's commit 89c06bd5 (memcg: use new logic for page stat accounting), we can
>>> use the 'struct page' flag to test page state instead of a per-page_cgroup flag. But
>>> memcg has a feature to move a page from one cgroup to another, which may race with
>>> "page stat accounting". So, in order to avoid the race, we have designed a
>>> bigger lock:
>>>
>>>           mem_cgroup_begin_update_page_stat()
>>>           modify page information    -->(a)
>>>           mem_cgroup_update_page_stat()  -->(b)
>>>           mem_cgroup_end_update_page_stat()
>>>
>>> It requires that (a) and (b) (the dirty page accounting) stay close together.
>>>
>>> In the two previous preparation patches, we reworked the vfs set-page-dirty routines,
>>> and now the interfaces are more explicit:
>>>     incrementing (2):
>>>         __set_page_dirty
>>>         __set_page_dirty_nobuffers
>>>     decrementing (2):
>>>         clear_page_dirty_for_io
>>>         cancel_dirty_page
>>>
>>>
>>> Signed-off-by: Sha Zhengju <handai.szj@taobao.com>
>>> ---
>>>   fs/buffer.c                |   17 ++++++++++++++---
>>>   include/linux/memcontrol.h |    1 +
>>>   mm/filemap.c               |    5 +++++
>>>   mm/memcontrol.c            |   28 +++++++++++++++++++++-------
>>>   mm/page-writeback.c        |   30 ++++++++++++++++++++++++------
>>>   mm/truncate.c              |    6 ++++++
>>>   6 files changed, 71 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/fs/buffer.c b/fs/buffer.c
>>> index 55522dd..d3714cc 100644
>>> --- a/fs/buffer.c
>>> +++ b/fs/buffer.c
>>> @@ -613,11 +613,19 @@ EXPORT_SYMBOL(mark_buffer_dirty_inode);
>>>   int __set_page_dirty(struct page *page,
>>>           struct address_space *mapping, int warn)
>>>   {
>>> +    bool locked;
>>> +    unsigned long flags;
>>> +    int ret = 0;
>> '= 0' and 'ret = 0' change (below) are redundant.  My vote is to remove
>> '= 0' here.
>>
>
> Nice catch. :-)
>
>>> +
>>>       if (unlikely(!mapping))
>>>           return !TestSetPageDirty(page);
>>>
>>> -    if (TestSetPageDirty(page))
>>> -        return 0;
>>> +    mem_cgroup_begin_update_page_stat(page, &locked, &flags);
>>> +
>>> +    if (TestSetPageDirty(page)) {
>>> +        ret = 0;
>>> +        goto out;
>>> +    }
>>>
>>>       spin_lock_irq(&mapping->tree_lock);
>>>       if (page->mapping) {    /* Race with truncate? */
>>> @@ -629,7 +637,10 @@ int __set_page_dirty(struct page *page,
>>>       spin_unlock_irq(&mapping->tree_lock);
>>>       __mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
>>>
>>> -    return 1;
>>> +    ret = 1;
>>> +out:
>>> +    mem_cgroup_end_update_page_stat(page, &locked, &flags);
>>> +    return ret;
>>>   }
>>>
>>>   /*
>>> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
>>> index 20b0f2d..ad37b59 100644
>>> --- a/include/linux/memcontrol.h
>>> +++ b/include/linux/memcontrol.h
>>> @@ -38,6 +38,7 @@ enum mem_cgroup_stat_index {
>>>       MEM_CGROUP_STAT_RSS,       /* # of pages charged as anon rss */
>>>       MEM_CGROUP_STAT_FILE_MAPPED,  /* # of pages charged as file rss */
>>>       MEM_CGROUP_STAT_SWAP, /* # of pages, swapped out */
>>> +    MEM_CGROUP_STAT_FILE_DIRTY,  /* # of dirty pages in page cache */
>>>       MEM_CGROUP_STAT_NSTATS,
>>>   };
>>>
>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>> index 1f19ec3..5159a49 100644
>>> --- a/mm/filemap.c
>>> +++ b/mm/filemap.c
>>> @@ -140,6 +140,11 @@ void __delete_from_page_cache(struct page *page)
>>>        * having removed the page entirely.
>>>        */
>>>       if (PageDirty(page) && mapping_cap_account_dirty(mapping)) {
>>> +        /*
>>> +         * Do not change page state, so no need to use mem_cgroup_
>>> +         * {begin, end}_update_page_stat to get lock.
>>> +         */
>>> +        mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_DIRTY);
>> I do not understand this comment.  What serializes this function and
>> mem_cgroup_move_account()?
>>
>
> The race exists just because the two competitors share one
> public variable, and one reads it while the other writes it.
> I thought that if both sides (accounting and cgroup_move) do not
> change the page flag, then risks like double-counting (see below)
> will not happen.
>
>               CPU-A                                   CPU-B
>          Set PG_dirty
>          (delay)                                move_lock_mem_cgroup()
>                                                 if (PageDirty(page))
>                                                     new_memcg->nr_dirty++
>                                                 pc->mem_cgroup = new_memcg;
>                                                 move_unlock_mem_cgroup()
>          move_lock_mem_cgroup()
>          memcg = pc->mem_cgroup
>          new_memcg->nr_dirty++
>
>
> But after second thoughts, it does have problem if without lock:
>
>               CPU-A                                   CPU-B
>          if (PageDirty(page)) {
>                                                 move_lock_mem_cgroup()
>                                                 if (TestClearPageDirty(page))
>                                                     memcg = pc->mem_cgroup
>                                                     memcg->nr_dirty--
>                                                 move_unlock_mem_cgroup()
>
>          memcg = pc->mem_cgroup
>          memcg->nr_dirty--
>          }
>
>
> A race may occur with the clear_page_dirty() operation,
> so this time I think we need the lock again...
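For illustration, here is how the __delete_from_page_cache() hunk might look with the lock taken around the decrement. This is only a sketch following the same pattern __set_page_dirty() uses in the patch above, not the final code, and the trailing zone/bdi updates are reproduced from memory of that era's mainline:

```c
void __delete_from_page_cache(struct page *page)
{
	struct address_space *mapping = page->mapping;
	bool locked;
	unsigned long flags;

	/* ... radix-tree removal elided ... */

	if (PageDirty(page) && mapping_cap_account_dirty(mapping)) {
		/*
		 * Hold the memcg move lock so that reading pc->mem_cgroup
		 * and decrementing its dirty counter cannot interleave
		 * with mem_cgroup_move_account() recharging the page.
		 */
		mem_cgroup_begin_update_page_stat(page, &locked, &flags);
		mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_DIRTY);
		mem_cgroup_end_update_page_stat(page, &locked, &flags);
		dec_zone_page_state(page, NR_FILE_DIRTY);
		dec_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);
	}
}
```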
>
> Kame, what about your opinion...
>
I think the Dirty bit is cleared implicitly here... so having the lock will be good.
Thanks,
-Kame
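For readers following the thread: the begin/end protocol from commit 89c06bd5 that the patch relies on works roughly as below. This is a simplified sketch, not the literal memcontrol.c source; helper names such as mem_cgroup_stealed() and move_lock_mem_cgroup() are approximations of that era's internals:

```c
/*
 * Updaters normally run lockless under RCU; only while a task is
 * being moved between cgroups do they fall back to the per-memcg
 * move lock (a spinlock taken with irqsave).
 */
void mem_cgroup_begin_update_page_stat(struct page *page,
				       bool *locked, unsigned long *flags)
{
	struct page_cgroup *pc = lookup_page_cgroup(page);
	struct mem_cgroup *memcg;

	*locked = false;
	rcu_read_lock();			/* pins pc->mem_cgroup */
	memcg = pc->mem_cgroup;
	if (unlikely(mem_cgroup_stealed(memcg))) {	/* move in flight? */
		move_lock_mem_cgroup(memcg, flags);	/* spin_lock_irqsave */
		*locked = true;
	}
}

void mem_cgroup_end_update_page_stat(struct page *page,
				     bool *locked, unsigned long *flags)
{
	struct page_cgroup *pc = lookup_page_cgroup(page);

	if (*locked)
		move_unlock_mem_cgroup(pc->mem_cgroup, flags);
	rcu_read_unlock();
}
```

Between the two calls, mem_cgroup_update_page_stat() can then safely do a per-cpu add on the memcg's counter, since any concurrent move_account() must hold the same move lock.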




Thread overview: 132+ messages
2012-06-28 10:54 [PATCH 0/7] Per-cgroup page stat accounting Sha Zhengju
2012-06-28 10:57 ` [PATCH 1/7] memcg: update cgroup memory document Sha Zhengju
2012-07-02  7:00   ` Kamezawa Hiroyuki
2012-07-04 12:47   ` Michal Hocko
2012-07-07 13:45   ` Fengguang Wu
2012-06-28 10:58 ` [PATCH 2/7] memcg: remove MEMCG_NR_FILE_MAPPED Sha Zhengju
2012-07-02 10:44   ` Kamezawa Hiroyuki
2012-07-04 12:56   ` Michal Hocko
2012-07-04 12:58     ` Michal Hocko
2012-07-07 13:48   ` Fengguang Wu
2012-07-09 21:01   ` Greg Thelen
2012-07-11  8:00     ` Sha Zhengju
2012-06-28 11:01 ` [PATCH 3/7] Make TestSetPageDirty and dirty page accounting in one func Sha Zhengju
2012-07-02 11:14   ` Kamezawa Hiroyuki
2012-07-07 14:42     ` Fengguang Wu
2012-07-04 14:23   ` Michal Hocko
2012-06-28 11:03 ` [PATCH 4/7] Use vfs __set_page_dirty interface instead of doing it inside filesystem Sha Zhengju
2012-06-29  5:21   ` Sage Weil
2012-07-02  8:10     ` Sha Zhengju
2012-07-02 14:49       ` Sage Weil
2012-07-04  8:11         ` Sha Zhengju
2012-07-05 15:20           ` Sage Weil
2012-07-05 15:40             ` Sha Zhengju
2012-07-04 14:27   ` Michal Hocko
2012-06-28 11:04 ` [PATCH 5/7] memcg: add per cgroup dirty pages accounting Sha Zhengju
2012-07-03  5:57   ` Kamezawa Hiroyuki
2012-07-08 14:45     ` Fengguang Wu
2012-07-04 16:11   ` Michal Hocko
2012-07-09 21:02   ` Greg Thelen
2012-07-11  9:32     ` Sha Zhengju
2012-07-19  6:33       ` Kamezawa Hiroyuki [this message]
2012-06-28 11:05 ` [PATCH 6/7] memcg: add per cgroup writeback " Sha Zhengju
2012-07-03  6:31   ` Kamezawa Hiroyuki
2012-07-04  8:24     ` Sha Zhengju
2012-07-08 14:44     ` Fengguang Wu
2012-07-08 23:01       ` Johannes Weiner
2012-07-09  1:37         ` Fengguang Wu
2012-07-04 16:15   ` Michal Hocko
2012-06-28 11:06 ` Sha Zhengju
2012-07-08 14:53   ` Fengguang Wu
2012-07-09  3:36     ` Sha Zhengju
2012-07-09  4:14       ` Fengguang Wu
2012-07-09  4:18         ` Kamezawa Hiroyuki
2012-07-09  5:22           ` Sha Zhengju
2012-07-09  5:28             ` Fengguang Wu
2012-07-09  5:19         ` Sha Zhengju
2012-07-09  5:25           ` Fengguang Wu
2012-07-09 21:02   ` Greg Thelen
2012-06-28 11:06 ` [PATCH 7/7] memcg: print more detailed info while memcg oom happening Sha Zhengju
2012-07-04  8:25   ` Sha Zhengju
2012-07-04  8:29   ` Kamezawa Hiroyuki
2012-07-04 11:20     ` Sha Zhengju
2012-06-29  8:23 ` [PATCH 0/7] Per-cgroup page stat accounting Kamezawa Hiroyuki
2012-07-02  7:51   ` Sha Zhengju
