From: Johannes Weiner <hannes@cmpxchg.org>
To: Mel Gorman <mgorman@suse.de>
Cc: Linux-MM <linux-mm@kvack.org>, Jiri Slaby <jslaby@suse.cz>,
	Valdis Kletnieks <Valdis.Kletnieks@vt.edu>,
	Rik van Riel <riel@redhat.com>,
	Zlatko Calusic <zcalusic@bitsync.net>,
	dormando <dormando@rydia.net>,
	Satoru Moriya <satoru.moriya@hds.com>,
	Michal Hocko <mhocko@suse.cz>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 02/10] mm: vmscan: Obey proportional scanning requirements for kswapd
Date: Fri, 22 Mar 2013 15:09:02 -0400	[thread overview]
Message-ID: <20130322190902.GA4611@cmpxchg.org> (raw)
In-Reply-To: <20130322182556.GB32241@suse.de>

On Fri, Mar 22, 2013 at 06:25:56PM +0000, Mel Gorman wrote:
> On Fri, Mar 22, 2013 at 12:53:49PM -0400, Johannes Weiner wrote:
> > So would it make sense to determine the percentage scanned of the type
> > that we stop scanning, then scale the original goal of the remaining
> > LRUs to that percentage, and scan the remainder?
> 
> To preserve existing behaviour, that makes sense. I'm not convinced it's
> necessarily the best idea, but altering it would be beyond the scope of
> this series and would bite off more than I'm willing to chew. This
> actually simplifies things a bit and shrink_lruvec turns into the
> (untested) code below. It does not do exact proportional scanning, but I
> do not think that is necessary either; it's a useful enough
> approximation. Unfortunately it could still end up reclaiming much more
> than sc->nr_to_reclaim, but fixing that requires reworking how kswapd
> scans at different priorities.

In which way does it not do exact proportional scanning?  I commented
on one issue below, but maybe you were referring to something else.

Yes, it's a little unfortunate that we escalate to a gigantic scan
window first, and then have to contort ourselves in the process of
backing off gracefully after we reclaimed a few pages...
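
(For reference, and roughly speaking: get_scan_count() sizes each LRU's
target as about lru_size >> sc->priority before splitting it by the
anon/file ratio, so as kswapd drops through the priorities the targets
balloon toward the whole list, and any early-stop logic then has to
unwind from that.)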

> Is this closer to what you had in mind?
> 
> static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
> {
> 	unsigned long nr[NR_LRU_LISTS];
> 	unsigned long nr_to_scan;
> 	enum lru_list lru;
> 	unsigned long nr_reclaimed = 0;
> 	unsigned long nr_to_reclaim = sc->nr_to_reclaim;
> 	unsigned long nr_anon_scantarget, nr_file_scantarget;
> 	struct blk_plug plug;
> 	bool scan_adjusted = false;
> 
> 	get_scan_count(lruvec, sc, nr);
> 
> 	/* Record the original scan target for proportional adjustments later */
> 	nr_file_scantarget = nr[LRU_INACTIVE_FILE] + nr[LRU_ACTIVE_FILE] + 1;
> 	nr_anon_scantarget = nr[LRU_INACTIVE_ANON] + nr[LRU_ACTIVE_ANON] + 1;
> 
> 	blk_start_plug(&plug);
> 	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
> 					nr[LRU_INACTIVE_FILE]) {
> 		unsigned long nr_anon, nr_file, percentage;
> 
> 		for_each_evictable_lru(lru) {
> 			if (nr[lru]) {
> 				nr_to_scan = min(nr[lru], SWAP_CLUSTER_MAX);
> 				nr[lru] -= nr_to_scan;
> 
> 				nr_reclaimed += shrink_list(lru, nr_to_scan,
> 							    lruvec, sc);
> 			}
> 		}
> 
> 		if (nr_reclaimed < nr_to_reclaim || scan_adjusted)
> 			continue;
> 
> 		/*
> 		 * For global direct reclaim, reclaim only the number of pages
> 		 * requested. Less care is taken to scan proportionally as it
> 		 * is more important to minimise direct reclaim stall latency
> 		 * than it is to properly age the LRU lists.
> 		 */
> 		if (global_reclaim(sc) && !current_is_kswapd())
> 			break;
> 
> 		/*
> 		 * For kswapd and memcg, reclaim at least the number of pages
> 		 * requested. Ensure that the anon and file LRUs shrink
> 		 * proportionally to what was requested by get_scan_count().
> 		 * We stop reclaiming one LRU and reduce the amount of
> 		 * scanning proportionally to the original scan target.
> 		 */
> 		nr_file = nr[LRU_INACTIVE_FILE] + nr[LRU_ACTIVE_FILE];
> 		nr_anon = nr[LRU_INACTIVE_ANON] + nr[LRU_ACTIVE_ANON];
> 
> 		if (nr_file > nr_anon) {
> 			lru = LRU_BASE;
> 			percentage = nr_anon * 100 / nr_anon_scantarget;
> 		} else {
> 			lru = LRU_FILE;
> 			percentage = nr_file * 100 / nr_file_scantarget;
> 		}
> 
> 		/* Stop scanning the smaller of the two LRUs */
> 		nr[lru] = 0;
> 		nr[lru + LRU_ACTIVE] = 0;
> 
> 		/* Reduce scanning of the other LRU proportionally */
> 		lru = (lru == LRU_FILE) ? LRU_BASE : LRU_FILE;
> 		nr[lru] = nr[lru] * percentage / 100;
> 		nr[lru + LRU_ACTIVE] = nr[lru + LRU_ACTIVE] * percentage / 100;
> 
> 		/* Only adjust the scan targets once */
> 		scan_adjusted = true;

The percentage is taken from the original goal but then applied to the
remainder of the scan goal for the LRUs we continue scanning.  The more
pages that have already been scanned, the more inaccurate this gets.
Is that what you had in mind by "useful enough approximation"?
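
(To put made-up numbers on that, purely as an illustration: say the
original file goal is 1000 pages and the anon goal is 500, and 200
pages have been scanned off each list by the time nr_to_reclaim is
met.  nr_anon is then 300, so percentage is 300 * 100 / 500 = 60
(ignoring the +1).  The file remainder then becomes
(1000 - 200) * 60 / 100 = 480, whereas applying that same 60% to the
original file goal and subtracting what has already been scanned would
give 1000 * 60 / 100 - 200 = 400.  The gap is roughly
(pages already scanned)^2 / (anon scan target), so it keeps growing
the longer we scan before backing off.)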
