* per-cpu statistics
@ 2013-02-28  7:59 Glauber Costa
From: Glauber Costa @ 2013-02-28  7:59 UTC
  To: KAMEZAWA Hiroyuki, Michal Hocko, linux-mm, Johannes Weiner,
	Tejun Heo, Cgroups, Mel Gorman

Hi guys

Please enlighten me regarding some historic aspect of memcg before I go
changing something I shouldn't...

Regarding memcg stats, is there any reason for us to use the current
per-cpu implementation we have instead of a percpu_counter?

We are doing something like this:

        get_online_cpus();
        for_each_online_cpu(cpu)
                val += per_cpu(memcg->stat->count[idx], cpu);
#ifdef CONFIG_HOTPLUG_CPU
        spin_lock(&memcg->pcp_counter_lock);
        val += memcg->nocpu_base.count[idx];
        spin_unlock(&memcg->pcp_counter_lock);
#endif
        put_online_cpus();

It seems to me that we are just re-implementing whatever percpu_counters
already do, handling the complication ourselves.
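
For comparison, here is a minimal sketch of what the read and update
sides could look like on top of percpu_counter. The memcg_* wrapper
names and the use of MEM_CGROUP_STAT_NSTATS as the array size are
hypothetical; only the percpu_counter_* calls are the real API from
lib/percpu_counter.c:

        #include <linux/percpu_counter.h>

        /* each entry would be set up once with percpu_counter_init() */
        static struct percpu_counter memcg_stat[MEM_CGROUP_STAT_NSTATS];

        static s64 memcg_read_stat(int idx)
        {
                /* exact sum: takes fbc->lock and folds in every online
                 * CPU's delta; hotplug is handled internally */
                return percpu_counter_sum(&memcg_stat[idx]);
        }

        static void memcg_mod_stat(int idx, s64 delta)
        {
                /* accumulates per-cpu; spills into the shared count
                 * only when the local delta exceeds the batch */
                percpu_counter_add(&memcg_stat[idx], delta);
        }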

It surely is an array, and this keeps the fields together. But does it
really matter? Did it come from some measurable result?

I wouldn't touch it if it weren't bothering me. But the reason I ask
is that I am resurrecting the patches to bypass the root cgroup
charges when it is the only group in the system. For that, I would like
to transfer charges from the global counters to our memcg equivalents.

Things like MM_ANONPAGES are not percpu, though, and when I add them
to the memcg percpu structures, I would have to somehow distribute the
counts around. When we uncharge, they can become negative.

percpu_counters already handle all of that, and can cope with
temporary negative values in the per-cpu data that are later folded
back into the main base counter.

We are counting pages, so the fact that we're restricted to only half of
the 64-bit range in percpu counters doesn't seem to be that much of a
problem.

If this is just a historic leftover, I can replace them all with
percpu_counters. Any words on that?

* Re: per-cpu statistics
@ 2013-03-01 13:48   ` Sha Zhengju
From: Sha Zhengju @ 2013-03-01 13:48 UTC
  To: Glauber Costa
  Cc: KAMEZAWA Hiroyuki, Michal Hocko, linux-mm, Johannes Weiner,
	Tejun Heo, Cgroups, Mel Gorman

Hi Glauber,

Forgive me, I'm replying not because I know the reason for the current
per-cpu implementation, but because you're mentioning something I'm
also interested in. Details below.

On Thu, Feb 28, 2013 at 3:59 PM, Glauber Costa <glommer@parallels.com> wrote:
> Hi guys
>
> Please enlighten me regarding some historic aspect of memcg before I go
> changing something I shouldn't...
>
> Regarding memcg stats, is there any reason for us to use the current
> per-cpu implementation we have instead of a percpu_counter?
>
> We are doing something like this:
>
>         get_online_cpus();
>         for_each_online_cpu(cpu)
>                 val += per_cpu(memcg->stat->count[idx], cpu);
> #ifdef CONFIG_HOTPLUG_CPU
>         spin_lock(&memcg->pcp_counter_lock);
>         val += memcg->nocpu_base.count[idx];
>         spin_unlock(&memcg->pcp_counter_lock);
> #endif
>         put_online_cpus();
>
> It seems to me that we are just re-implementing whatever percpu_counters
> already do, handling the complication ourselves.
>
> It surely is an array, and this keeps the fields together. But does it
> really matter? Did it come from some measurable result?
>
> I wouldn't touch it if it weren't bothering me. But the reason I ask
> is that I am resurrecting the patches to bypass the root cgroup
> charges when it is the only group in the system. For that, I would like
> to transfer charges from the global counters to our memcg equivalents.

I'm not sure I fully understand your point; the root memcg already
doesn't charge pages and only does some page stat accounting
(CACHE/RSS/SWAP). I'm also trying to do some optimization specific to
the overhead of root memcg stat accounting, and the first attempt is
posted here: https://lkml.org/lkml/2013/1/2/71 . But it only covered
FILE_MAPPED/DIRTY/WRITEBACK (I've added the last two in that
patchset), and Michal Hocko accepted the approach (so did Kame) and
suggested I should handle all the stats in the same way, including
CACHE/RSS. But I do not handle things related to the memcg LRU, where
I notice you have done some work.

It's possible that we may take different ways to bypass root memcg
stat accounting. The next round of the series will be sent out in the
following few days (doing some tests now), and any comments and
collaboration are welcome. (Glad to cc you, of course, if you're also
interested in it. :) )

Many thanks!


> Things like MM_ANONPAGES are not percpu, though, and when I add them
> to the memcg percpu structures, I would have to somehow distribute the
> counts around. When we uncharge, they can become negative.
>
> percpu_counters already handle all of that, and can cope with
> temporary negative values in the per-cpu data that are later folded
> back into the main base counter.
>
> We are counting pages, so the fact that we're restricted to only half of
> the 64-bit range in percpu counters doesn't seem to be that much of a
> problem.
>
> If this is just a historic leftover, I can replace them all with
> percpu_counters. Any words on that?



-- 
Thanks,
Sha



* Re: per-cpu statistics
@ 2013-03-04  0:55   ` Kamezawa Hiroyuki
From: Kamezawa Hiroyuki @ 2013-03-04  0:55 UTC
  To: Glauber Costa
  Cc: Michal Hocko, linux-mm, Johannes Weiner, Tejun Heo, Cgroups, Mel Gorman

(2013/02/28 16:59), Glauber Costa wrote:
> Hi guys
>
> Please enlighten me regarding some historic aspect of memcg before I go
> changing something I shouldn't...
>
> Regarding memcg stats, is there any reason for us to use the current
> per-cpu implementation we have instead of a percpu_counter?
>
> We are doing something like this:
>
>          get_online_cpus();
>          for_each_online_cpu(cpu)
>                  val += per_cpu(memcg->stat->count[idx], cpu);
> #ifdef CONFIG_HOTPLUG_CPU
>          spin_lock(&memcg->pcp_counter_lock);
>          val += memcg->nocpu_base.count[idx];
>          spin_unlock(&memcg->pcp_counter_lock);
> #endif
>          put_online_cpus();
>
> It seems to me that we are just re-implementing whatever percpu_counters
> already do, handling the complication ourselves.
>
> It surely is an array, and this keeps the fields together. But does it
> really matter? Did it come from some measurable result?
>
> I wouldn't touch it if it weren't bothering me. But the reason I ask
> is that I am resurrecting the patches to bypass the root cgroup
> charges when it is the only group in the system. For that, I would like
> to transfer charges from the global counters to our memcg equivalents.
>
> Things like MM_ANONPAGES are not percpu, though, and when I add them
> to the memcg percpu structures, I would have to somehow distribute the
> counts around. When we uncharge, they can become negative.
>
> percpu_counters already handle all of that, and can cope with
> temporary negative values in the per-cpu data that are later folded
> back into the main base counter.
>
> We are counting pages, so the fact that we're restricted to only half of
> the 64-bit range in percpu counters doesn't seem to be that much of a
> problem.
>
> If this is just a historic leftover, I can replace them all with
> percpu_counters. Any words on that?
>

One reason I didn't like percpu_counter *was* its memory layout.

==
struct percpu_counter {
         raw_spinlock_t lock;
         s64 count;
#ifdef CONFIG_HOTPLUG_CPU
         struct list_head list;  /* All percpu_counters are on a list */
#endif
         s32 __percpu *counters;
};
==

Assume we have counters in an array; then we'll have:

    lock
    count
    list
    pointer
    lock
    count
    list
    pointer
    ....

One counter's lock ops will invalidate the cache lines holding the
neighbouring counters in the array, and we tend to update several
counters at once.
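
A rough sketch of the two layouts (struct and field names approximate,
taken from the memcg code of that time):

==
/* current memcg scheme: all counters for one CPU sit in a single
 * contiguous per-cpu array, so updating several of them touches
 * few cache lines */
struct mem_cgroup_stat_cpu {
        long count[MEM_CGROUP_STAT_NSTATS];
};

/* percpu_counter alternative: lock, count, list and percpu pointer
 * of each counter are interleaved, so taking one counter's lock
 * dirties the cache line shared with its neighbours */
struct percpu_counter stat[MEM_CGROUP_STAT_NSTATS];
==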

If you measure performance on a large enough SMP system and it looks
good, I think it's ok to go with lib/percpu_counter.c.

Thanks,
-Kame


* Re: per-cpu statistics
@ 2013-03-04  1:01     ` Tejun Heo
From: Tejun Heo @ 2013-03-04  1:01 UTC
  To: Kamezawa Hiroyuki
  Cc: Glauber Costa, Michal Hocko, linux-mm, Johannes Weiner, Cgroups,
	Mel Gorman

Hello,

On Mon, Mar 04, 2013 at 09:55:25AM +0900, Kamezawa Hiroyuki wrote:
> One reason I didn't like percpu_counter *was* its memory layout.
> 
> ==
> struct percpu_counter {
>         raw_spinlock_t lock;
>         s64 count;
> #ifdef CONFIG_HOTPLUG_CPU
>         struct list_head list;  /* All percpu_counters are on a list */
> #endif
>         s32 __percpu *counters;
> };
> ==
> 
> Assume we have counters in an array; then we'll have:
> 
>    lock
>    count
>    list
>    pointer
>    lock
>    count
>    list
>    pointer
>    ....
> 
> One counter's lock ops will invalidate the cache lines holding the
> neighbouring counters in the array, and we tend to update several
> counters at once.

I agree that percpu_counter leaves quite a bit to be desired.  It
would be great if we could implement a generic percpu stats facility
that takes care of aggregating the values periodically, preferably
with provisions to limit the amount of deviation the global counter
may reach.
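
For illustration, percpu_counter's batch mechanism already gives a
crude version of such a bound. In the API of that era (later kernels
rename it percpu_counter_add_batch; fbc, delta and batch are
placeholders here):

	/* the per-cpu delta is folded into fbc->count whenever it
	 * crosses +/-batch, so the shared counter never deviates by
	 * more than roughly batch * num_online_cpus() */
	__percpu_counter_add(&fbc, delta, batch);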

Thanks.

-- 
tejun



* Re: per-cpu statistics
@ 2013-03-04  7:25     ` Glauber Costa
From: Glauber Costa @ 2013-03-04  7:25 UTC
  To: Sha Zhengju
  Cc: KAMEZAWA Hiroyuki, Michal Hocko, linux-mm, Johannes Weiner,
	Tejun Heo, Cgroups, Mel Gorman

On 03/01/2013 05:48 PM, Sha Zhengju wrote:
> Hi Glauber,
> 
> Forgive me, I'm replying not because I know the reason for the current
> per-cpu implementation, but because you're mentioning something I'm
> also interested in. Details below.
> 
> 
> I'm not sure I fully understand your point; the root memcg already
> doesn't charge pages and only does some page stat accounting
> (CACHE/RSS/SWAP).

Can you point me to the final commits of this in the tree? I am using
the latest git mm from mhocko, and it is not entirely clear to me what
you are talking about.

> I'm also trying to do some optimization specific to the overhead of
> root memcg stat accounting, and the first attempt is posted here:
> https://lkml.org/lkml/2013/1/2/71 . But it only covered
> FILE_MAPPED/DIRTY/WRITEBACK (I've added the last two in that
> patchset), and Michal Hocko accepted the approach (so did Kame) and
> suggested I should handle all the stats in the same way, including
> CACHE/RSS. But I do not handle things related to the memcg LRU, where
> I notice you have done some work.
> 
Yes, the LRU is a bit tricky, and it is what is keeping me from
posting the patchset I have. I haven't fully finished it, but I am on
my way.


> It's possible that we may take different ways to bypass root memcg
> stat accounting. The next round of the series will be sent out in the
> following few days (doing some tests now), and any comments and
> collaboration are welcome. (Glad to cc you, of course, if you're also
> interested in it. :) )
> 

I am interested, of course. As you know, I started working on this a
while ago and had to interrupt it for a while. I resumed it last week,
but if you have managed to merge something already, I'd be happy to
rebase.



* Re: per-cpu statistics
@ 2013-03-05  7:17     ` Sha Zhengju
From: Sha Zhengju @ 2013-03-05  7:17 UTC
  To: Glauber Costa
  Cc: KAMEZAWA Hiroyuki, Michal Hocko, linux-mm, Johannes Weiner,
	Tejun Heo, Cgroups, Mel Gorman

On Mon, Mar 4, 2013 at 3:25 PM, Glauber Costa <glommer@parallels.com> wrote:
> On 03/01/2013 05:48 PM, Sha Zhengju wrote:
>> Hi Glauber,
>>
>> Forgive me, I'm replying not because I know the reason for the current
>> per-cpu implementation, but because you're mentioning something I'm
>> also interested in. Details below.
>>
>>
>> I'm not sure I fully understand your point; the root memcg already
>> doesn't charge pages and only does some page stat accounting
>> (CACHE/RSS/SWAP).
>
> Can you point me to the final commits of this in the tree? I am using
> the latest git mm from mhocko, and it is not entirely clear to me what
> you are talking about.

Sorry, maybe my "root memcg charge" is confusing. What I mean is that
the root memcg doesn't do the resource counter charge (the
mem_cgroup_is_root() check in __mem_cgroup_try_charge()) but still
needs to do the other work (in __mem_cgroup_commit_charge()): set
pc->mem_cgroup, SetPageCgroupUsed, and account memcg page statistics
such as CACHE/RSS.
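
A simplified sketch of that flow (2013-era memcg, error handling
elided and names approximate):

	/* __mem_cgroup_try_charge(): the res_counter charge is what
	 * the root memcg skips */
	if (!mem_cgroup_is_root(memcg))
		ret = res_counter_charge(&memcg->res, nr_pages * PAGE_SIZE,
					 &fail_res);

	/* __mem_cgroup_commit_charge(): this part still runs for root */
	pc->mem_cgroup = memcg;
	SetPageCgroupUsed(pc);
	mem_cgroup_charge_statistics(memcg, anon, nr_pages);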

Btw, the original commit is 0c3e73e84f ("memcg: improve resource
counter scalability"), but it has been drastically modified since. :)

>
>> I'm also trying to do some optimization specific to the overhead of
>> root memcg stat accounting, and the first attempt is posted here:
>> https://lkml.org/lkml/2013/1/2/71 . But it only covered
>> FILE_MAPPED/DIRTY/WRITEBACK (I've added the last two in that
>> patchset), and Michal Hocko accepted the approach (so did Kame) and
>> suggested I should handle all the stats in the same way, including
>> CACHE/RSS. But I do not handle things related to the memcg LRU, where
>> I notice you have done some work.
>>
> Yes, the LRU is a bit tricky, and it is what is keeping me from
> posting the patchset I have. I haven't fully finished it, but I am on
> my way.
>
>
>> It's possible that we may take different ways to bypass root memcg
>> stat accounting. The next round of the series will be sent out in the
>> following few days (doing some tests now), and any comments and
>> collaboration are welcome. (Glad to cc you, of course, if you're also
>> interested in it. :) )
>>
>
> I am interested, of course. As you know, I started working on this a
> while ago and had to interrupt it for a while. I resumed it last week,
> but if you have managed to merge something already, I'd be happy to
> rebase.
>

I do appreciate your support! Thanks!


Regards,
Sha

