Subject: [PATCH v5 00/31] kmemcg shrinkers
From: Glauber Costa
Date: 2013-05-09  6:06 UTC
To: linux-mm
Cc: Andrew Morton, Mel Gorman, cgroups, kamezawa.hiroyu,
	Johannes Weiner, Michal Hocko, hughd, Greg Thelen, linux-fsdevel

[ Sending again, forgot to CC fsdevel. Shame on me ]
To Mel
======

Mel, I have identified the overly aggressive behavior you noticed as a bug
in the at-least-one-pass patch, which would ask the shrinkers to scan a full
batch even when total_scan < batch. The shrinkers would do their best to
comply, and eventually succeed. I also went further and made that behavior
apply to direct reclaim only - the only case that really matters for memcg,
and one in which we could argue that we are more or less desperate for even
small amounts of memory. Thank you very much for spotting this.
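
For reference, this is roughly the shape the scan loop in shrink_slab()
takes with the fix applied. This is a hedged sketch rather than the literal
patch: current_is_direct_reclaim() is a hypothetical stand-in for however
the reclaim context is actually detected in mm/vmscan.c, the locals are the
kind shrink_slab() already carries, and error/SHRINK_STOP handling is
omitted.

	/* Full batches are scanned exactly as before. */
	while (total_scan >= batch_size) {
		shrinkctl->nr_to_scan = batch_size;
		freed += shrinker->scan_objects(shrinker, shrinkctl);
		total_scan -= batch_size;
		cond_resched();
	}

	/*
	 * Only direct reclaim takes the extra partial pass, and it asks
	 * for the objects that are actually left (total_scan), never for
	 * a full batch, so the shrinker is not pushed past its
	 * proportional share.
	 */
	if (total_scan && current_is_direct_reclaim()) {
		shrinkctl->nr_to_scan = total_scan;
		freed += shrinker->scan_objects(shrinker, shrinkctl);
	}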

Running postmark on the final result (at least on my 2-node box) shows
something a lot saner. We are still stealing more inodes than before, but
only by around 15 %. Since the correct balance is somewhat heuristic anyway,
I personally think this is acceptable, but I am waiting to hear from you on
this matter. Meanwhile, I am investigating further to try to pinpoint exactly
where this comes from. It might be because of the new node-aware behavior, or
because of the increased calculation precision in the first patch.

In particular, I haven't done anything about your comment regarding the
MAX_NODES array. Once the memcg patches are applied, fixing this becomes a
lot easier, because memcg already departs from a static MAX_NODES array to a
dynamic one. I wanted, however, to keep the amount of churn down in something
that I expect to be merged soon. I would suggest merging a patch that fixes
this on top of the series, instead of in the middle of it, if you really
think it matters. I, of course, commit to doing that in such a case.

The work
========

Hi,

This patchset implements targeted shrinking for memcg when kmem limits are
present. So far, we've been accounting kernel objects but failing allocations
when short of memory. This is because our only option would be to call the
global shrinker, depleting objects from all caches and breaking isolation.

The main idea is to associate per-memcg lists with each of the LRUs. The main
LRU still provides a single entry point, and when adding or removing an
element from the LRU, we use the page information to figure out which memcg
it belongs to and relay the element to the right list.
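
To make the routing concrete, the add path conceptually looks like the
sketch below. This illustrates the idea only and is not the series' actual
code: memcg_from_kmem_page() and lru_node_for_memcg() are hypothetical
helper names, while the list_lru_node fields come from the base list_lru
work.

bool list_lru_add(struct list_lru *lru, struct list_head *item)
{
	/* The object's page tells us which memcg (and node) owns it. */
	struct page *page = virt_to_head_page(item);
	struct mem_cgroup *memcg = memcg_from_kmem_page(page);
	struct list_lru_node *nlru =
		lru_node_for_memcg(lru, memcg, page_to_nid(page));
	bool added = false;

	spin_lock(&nlru->lock);
	if (list_empty(item)) {
		list_add_tail(item, &nlru->list);
		nlru->nr_items++;
		added = true;
	}
	spin_unlock(&nlru->lock);
	return added;
}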

Base work:
==========

Please note that this builds upon the recent work from Dave Chinner that
sanitizes the LRU shrinking API and makes the shrinkers node aware. Node
awareness is not *strictly* needed for my work, but I still perceive it
as an advantage. The API unification is a major need, and I build upon it
heavily: it allows us to manipulate the LRUs with ease, without knowledge
of the underlying objects. This time, I am including that work here as a
baseline.
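
For readers who have not followed the base series, a shrinker under the new
count/scan, node-aware API looks roughly like this. The sketch is written
against the API as it was eventually proposed for merging, so details may
differ slightly from this posting; foo_lru, foo_isolate (a list_lru_walk_cb)
and the foo shrinker itself are hypothetical.

static struct list_lru foo_lru;		/* hypothetical cache LRU */

static unsigned long foo_count_objects(struct shrinker *shrink,
				       struct shrink_control *sc)
{
	/* Cheap side: just report how many objects this node holds. */
	return list_lru_count_node(&foo_lru, sc->nid);
}

static unsigned long foo_scan_objects(struct shrinker *shrink,
				      struct shrink_control *sc)
{
	/* Expensive side: walk the per-node LRU, freeing up to nr_to_scan. */
	return list_lru_walk_node(&foo_lru, sc->nid, foo_isolate,
				  NULL, &sc->nr_to_scan);
}

static struct shrinker foo_shrinker = {
	.count_objects	= foo_count_objects,
	.scan_objects	= foo_scan_objects,
	.seeks		= DEFAULT_SEEKS,
	.flags		= SHRINKER_NUMA_AWARE,
};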

Main changes from *v4:
* Fixed a bug in user-generated memcg pressure
* Fixed overly-aggressive slab shrinker behavior spotted by Mel Gorman
* Various other fixes and comments by Mel Gorman

Main changes from *v3:
* Merged suggestions from the mailing list.
* Removed the memcg-walking code from the LRU. vmscan now drives all the
  hierarchy decisions, which makes more sense.
* Lazily free the old memcg arrays (they now need to be saved in struct lru).
  Since we need to call synchronize_rcu, calling it for every LRU can become
  expensive.
* Moved the dead memcg shrinker to vmpressure. Already sent independently to
  linux-mm for review.
* Changed the locking convention for LRU_RETRY. The callback now needs to
  return with the lru lock held, which silences warnings about possible lock
  imbalance (although the previous code was correct). A sketch of the
  convention follows this list.
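
As for the LRU_RETRY rule above, an isolate callback under the new
convention looks roughly like this. The sketch is modeled on how the
inode/dentry isolators behave rather than taken from this series, and
foo_object with its helpers is hypothetical.

static enum lru_status foo_isolate(struct list_head *item,
				   spinlock_t *lru_lock, void *cb_arg)
{
	struct foo_object *obj = container_of(item, struct foo_object, lru);

	if (!spin_trylock(&obj->lock))
		return LRU_SKIP;

	if (foo_object_busy(obj)) {
		/*
		 * We may drop the lru lock to do blocking work, but we
		 * must take it again before returning LRU_RETRY, so the
		 * walker always sees balanced locking.
		 */
		spin_unlock(&obj->lock);
		spin_unlock(lru_lock);
		foo_object_flush(obj);
		spin_lock(lru_lock);
		return LRU_RETRY;
	}

	list_del_init(item);
	spin_unlock(&obj->lock);
	return LRU_REMOVED;
}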

Main changes from *v2:
* Shrink dead memcgs when global pressure kicks in. Uses the new lru API.
* Bugfixes and comments from the mailing list.
* Proper hierarchy-aware walk in shrink_slab.

Main changes from *v1:
* Merged comments from the mailing list
* Reworked lru-memcg API
* Effective proportional shrinking
* Sanitized locking on the memcg side
* Bill user memory first when kmem == umem
* Various bugfixes


Dave Chinner (17):
  dcache: convert dentry_stat.nr_unused to per-cpu counters
  dentry: move to per-sb LRU locks
  dcache: remove dentries from LRU before putting on dispose list
  mm: new shrinker API
  shrinker: convert superblock shrinkers to new API
  list: add a new LRU list type
  inode: convert inode lru list to generic lru list code.
  dcache: convert to use new lru list infrastructure
  list_lru: per-node list infrastructure
  shrinker: add node awareness
  fs: convert inode and dentry shrinking to be node aware
  xfs: convert buftarg LRU to generic code
  xfs: convert dquot cache lru to list_lru
  fs: convert fs shrinkers to new scan/count API
  drivers: convert shrinkers to new count/scan API
  shrinker: convert remaining shrinkers to count/scan API
  shrinker: Kill old ->shrink API.

Glauber Costa (14):
  super: fix calculation of shrinkable objects for small numbers
  vmscan: take at least one pass with shrinkers
  hugepage: convert huge zero page shrinker to new shrinker API
  vmscan: also shrink slab in memcg pressure
  memcg,list_lru: duplicate LRUs upon kmemcg creation
  lru: add an element to a memcg list
  list_lru: per-memcg walks
  memcg: per-memcg kmem shrinking
  memcg: scan cache objects hierarchically
  super: targeted memcg reclaim
  memcg: move initialization to memcg creation
  vmpressure: in-kernel notifications
  memcg: reap dead memcgs upon global memory pressure.
  memcg: debugging facility to access dangling memcgs

 Documentation/cgroups/memory.txt          |  16 +
 arch/x86/kvm/mmu.c                        |  28 +-
 drivers/gpu/drm/i915/i915_dma.c           |   4 +-
 drivers/gpu/drm/i915/i915_gem.c           |  67 +++-
 drivers/gpu/drm/ttm/ttm_page_alloc.c      |  48 ++-
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c  |  55 ++-
 drivers/md/bcache/btree.c                 |  30 +-
 drivers/md/bcache/sysfs.c                 |   2 +-
 drivers/md/dm-bufio.c                     |  65 ++--
 drivers/staging/android/ashmem.c          |  46 ++-
 drivers/staging/android/lowmemorykiller.c |  40 +-
 drivers/staging/zcache/zcache-main.c      |  29 +-
 fs/dcache.c                               | 234 +++++++-----
 fs/drop_caches.c                          |   1 +
 fs/ext4/extents_status.c                  |  30 +-
 fs/gfs2/glock.c                           |  30 +-
 fs/gfs2/main.c                            |   3 +-
 fs/gfs2/quota.c                           |  14 +-
 fs/gfs2/quota.h                           |   4 +-
 fs/inode.c                                | 175 ++++-----
 fs/internal.h                             |   5 +
 fs/mbcache.c                              |  53 +--
 fs/nfs/dir.c                              |  20 +-
 fs/nfs/internal.h                         |   4 +-
 fs/nfs/super.c                            |   3 +-
 fs/nfsd/nfscache.c                        |  31 +-
 fs/quota/dquot.c                          |  39 +-
 fs/super.c                                | 107 ++++--
 fs/ubifs/shrinker.c                       |  20 +-
 fs/ubifs/super.c                          |   3 +-
 fs/ubifs/ubifs.h                          |   3 +-
 fs/xfs/xfs_buf.c                          | 169 ++++----
 fs/xfs/xfs_buf.h                          |   5 +-
 fs/xfs/xfs_dquot.c                        |   7 +-
 fs/xfs/xfs_icache.c                       |   4 +-
 fs/xfs/xfs_icache.h                       |   2 +-
 fs/xfs/xfs_qm.c                           | 275 ++++++-------
 fs/xfs/xfs_qm.h                           |   4 +-
 fs/xfs/xfs_super.c                        |  12 +-
 include/linux/dcache.h                    |   4 +
 include/linux/fs.h                        |  25 +-
 include/linux/list_lru.h                  | 134 +++++++
 include/linux/memcontrol.h                |  45 +++
 include/linux/shrinker.h                  |  44 ++-
 include/linux/swap.h                      |   2 +
 include/linux/vmpressure.h                |   6 +
 include/trace/events/vmscan.h             |   4 +-
 init/Kconfig                              |  17 +
 lib/Makefile                              |   2 +-
 lib/list_lru.c                            | 430 +++++++++++++++++++++
 mm/huge_memory.c                          |  17 +-
 mm/memcontrol.c                           | 614 +++++++++++++++++++++++++++---
 mm/memory-failure.c                       |   2 +
 mm/slab_common.c                          |   1 -
 mm/vmpressure.c                           |  52 ++-
 mm/vmscan.c                               | 334 +++++++++++-----
 net/sunrpc/auth.c                         |  45 ++-
 57 files changed, 2537 insertions(+), 928 deletions(-)
 create mode 100644 include/linux/list_lru.h
 create mode 100644 lib/list_lru.c

-- 
1.8.1.4

