* [PATCH 0/3] Follow-up fixes to node-lru series v3
@ 2016-07-18 14:50 Mel Gorman
2016-07-18 14:50 ` [PATCH 1/3] mm, vmscan: Remove redundant check in shrink_zones() Mel Gorman
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Mel Gorman @ 2016-07-18 14:50 UTC (permalink / raw)
To: Andrew Morton
Cc: Johannes Weiner, Minchan Kim, Vlastimil Babka, Linux-MM, LKML,
Mel Gorman
This is another round of fixups to the node-lru series. The most important
patch is the last one, which deals with a highmem accounting issue.
include/linux/mm_inline.h | 8 ++------
mm/vmscan.c | 25 +++++++++++--------------
2 files changed, 13 insertions(+), 20 deletions(-)
--
2.6.4
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org
* [PATCH 1/3] mm, vmscan: Remove redundant check in shrink_zones()
2016-07-18 14:50 [PATCH 0/3] Follow-up fixes to node-lru series v3 Mel Gorman
@ 2016-07-18 14:50 ` Mel Gorman
2016-07-18 16:11 ` Johannes Weiner
2016-07-18 23:54 ` Minchan Kim
2016-07-18 14:50 ` [PATCH 2/3] mm, vmscan: Release/reacquire lru_lock on pgdat change Mel Gorman
` (2 subsequent siblings)
3 siblings, 2 replies; 11+ messages in thread
From: Mel Gorman @ 2016-07-18 14:50 UTC (permalink / raw)
To: Andrew Morton
Cc: Johannes Weiner, Minchan Kim, Vlastimil Babka, Linux-MM, LKML,
Mel Gorman
As pointed out by Minchan Kim, shrink_zones() checks for populated
zones in a zonelist but a zonelist can never contain unpopulated
zones. While it's not related to the node-lru series, it can be
cleaned up now.
Suggested-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/vmscan.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3f06a7a0d135..45344acf52ba 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2605,9 +2605,6 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
for_each_zone_zonelist_nodemask(zone, z, zonelist,
sc->reclaim_idx, sc->nodemask) {
- if (!populated_zone(zone))
- continue;
-
/*
* Take care memory controller reclaiming has small influence
* to global LRU.
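[Editorial sketch, not part of the original mail: the reason the removed check was dead code is that zonelist construction filters unpopulated zones up front. The structures and names below are simplified user-space stand-ins for the kernel's, under that assumption.]

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for struct zone; only the field that
 * populated_zone() inspects is modeled here. */
struct zone {
	unsigned long present_pages;
};

static int populated_zone(const struct zone *z)
{
	return z->present_pages != 0;
}

/* The builder skips unpopulated zones when assembling the list,
 * so a walk over the resulting zonelist can never encounter one --
 * which is why the in-loop populated_zone() check never fired. */
static size_t build_zonelist(struct zone *zones, size_t n,
			     struct zone **list, size_t max)
{
	size_t count = 0;

	for (size_t i = 0; i < n && count < max; i++)
		if (populated_zone(&zones[i]))
			list[count++] = &zones[i];
	return count;
}
```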
--
2.6.4
* [PATCH 2/3] mm, vmscan: Release/reacquire lru_lock on pgdat change
2016-07-18 14:50 [PATCH 0/3] Follow-up fixes to node-lru series v3 Mel Gorman
2016-07-18 14:50 ` [PATCH 1/3] mm, vmscan: Remove redundant check in shrink_zones() Mel Gorman
@ 2016-07-18 14:50 ` Mel Gorman
2016-07-18 16:13 ` Johannes Weiner
2016-07-18 23:58 ` Minchan Kim
2016-07-18 14:50 ` [PATCH 3/3] mm, vmstat: remove zone and node double accounting by approximating retries -fix Mel Gorman
2016-07-18 20:02 ` [PATCH 0/3] Follow-up fixes to node-lru series v3 Johannes Weiner
3 siblings, 2 replies; 11+ messages in thread
From: Mel Gorman @ 2016-07-18 14:50 UTC (permalink / raw)
To: Andrew Morton
Cc: Johannes Weiner, Minchan Kim, Vlastimil Babka, Linux-MM, LKML,
Mel Gorman
With node-lru, the locking is based on the pgdat. As Minchan pointed
out, there is an opportunity to reduce LRU lock release/acquire in
check_move_unevictable_pages by only changing lock on a pgdat change.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/vmscan.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 45344acf52ba..a6f31617a08c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3775,24 +3775,24 @@ int page_evictable(struct page *page)
void check_move_unevictable_pages(struct page **pages, int nr_pages)
{
struct lruvec *lruvec;
- struct zone *zone = NULL;
+ struct pglist_data *pgdat = NULL;
int pgscanned = 0;
int pgrescued = 0;
int i;
for (i = 0; i < nr_pages; i++) {
struct page *page = pages[i];
- struct zone *pagezone;
+ struct pglist_data *pagepgdat = page_pgdat(page);
pgscanned++;
- pagezone = page_zone(page);
- if (pagezone != zone) {
- if (zone)
- spin_unlock_irq(zone_lru_lock(zone));
- zone = pagezone;
- spin_lock_irq(zone_lru_lock(zone));
+ pagepgdat = page_pgdat(page);
+ if (pagepgdat != pgdat) {
+ if (pgdat)
+ spin_unlock_irq(&pgdat->lru_lock);
+ pgdat = pagepgdat;
+ spin_lock_irq(&pgdat->lru_lock);
}
- lruvec = mem_cgroup_page_lruvec(page, zone->zone_pgdat);
+ lruvec = mem_cgroup_page_lruvec(page, pgdat);
if (!PageLRU(page) || !PageUnevictable(page))
continue;
@@ -3808,10 +3808,10 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
}
}
- if (zone) {
+ if (pgdat) {
__count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
__count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
- spin_unlock_irq(zone_lru_lock(zone));
+ spin_unlock_irq(&pgdat->lru_lock);
}
}
#endif /* CONFIG_SHMEM */
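[Editorial sketch, not part of the original mail: the locking pattern above can be modeled in user space by counting lock acquisitions. `page_node` and the counter are illustrative stand-ins, not kernel API; the point is that one lock is taken per run of same-node pages rather than per page.]

```c
#include <assert.h>
#include <stddef.h>

/* Model of check_move_unevictable_pages() locking after this patch:
 * release/reacquire the lru_lock only when the owning node changes. */
static size_t count_lock_acquires(const int *page_node, size_t nr)
{
	const int NO_NODE = -1;
	int locked = NO_NODE;	/* node whose lru_lock we "hold" */
	size_t acquires = 0;

	for (size_t i = 0; i < nr; i++) {
		int node = page_node[i];

		if (node != locked) {
			/* if (locked != NO_NODE): unlock old node */
			locked = node;
			acquires++;	/* lock the new node */
		}
		/* ... move the page between lruvec lists here ... */
	}
	return acquires;	/* final unlock happens after the loop */
}
```

With six pages spread over runs on two nodes, this takes three locks where per-page locking would take six.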
--
2.6.4
* [PATCH 3/3] mm, vmstat: remove zone and node double accounting by approximating retries -fix
2016-07-18 14:50 [PATCH 0/3] Follow-up fixes to node-lru series v3 Mel Gorman
2016-07-18 14:50 ` [PATCH 1/3] mm, vmscan: Remove redundant check in shrink_zones() Mel Gorman
2016-07-18 14:50 ` [PATCH 2/3] mm, vmscan: Release/reacquire lru_lock on pgdat change Mel Gorman
@ 2016-07-18 14:50 ` Mel Gorman
2016-07-18 16:14 ` Johannes Weiner
2016-07-18 23:59 ` Minchan Kim
2016-07-18 20:02 ` [PATCH 0/3] Follow-up fixes to node-lru series v3 Johannes Weiner
3 siblings, 2 replies; 11+ messages in thread
From: Mel Gorman @ 2016-07-18 14:50 UTC (permalink / raw)
To: Andrew Morton
Cc: Johannes Weiner, Minchan Kim, Vlastimil Babka, Linux-MM, LKML,
Mel Gorman
As pointed out by Vlastimil, atomic_add() is already expected to handle
negative numbers, so the conditional is unnecessary. The atomic_sub()
branch was wrong anyway (subtracting an already-negative nr_pages adds);
this patch uses atomic_add() unconditionally.
This is a fix to the mmotm patch
mm-vmstat-remove-zone-and-node-double-accounting-by-approximating-retries.patch
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
include/linux/mm_inline.h | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index d29237428199..bcc4ed07fa90 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -10,12 +10,8 @@ extern atomic_t highmem_file_pages;
static inline void acct_highmem_file_pages(int zid, enum lru_list lru,
int nr_pages)
{
- if (is_highmem_idx(zid) && is_file_lru(lru)) {
- if (nr_pages > 0)
- atomic_add(nr_pages, &highmem_file_pages);
- else
- atomic_sub(nr_pages, &highmem_file_pages);
- }
+ if (is_highmem_idx(zid) && is_file_lru(lru))
+ atomic_add(nr_pages, &highmem_file_pages);
}
#else
static inline void acct_highmem_file_pages(int zid, enum lru_list lru,
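[Editorial sketch, not part of the original mail: the arithmetic behind this fix, using C11 atomics in user space as a stand-in. The kernel's atomic_t API differs, but the signed semantics are the same: adding a negative delta already subtracts, and the removed atomic_sub() branch inverted the sign a second time.]

```c
#include <assert.h>
#include <stdatomic.h>

/* The fixed form: one unconditional add handles both signs. */
static void acct_add(atomic_int *counter, int nr_pages)
{
	atomic_fetch_add(counter, nr_pages);
}

/* The removed form: for nr_pages <= 0 it subtracted a negative
 * value, i.e. -(-n) == +n, so it added instead of subtracting. */
static void acct_buggy(atomic_int *counter, int nr_pages)
{
	if (nr_pages > 0)
		atomic_fetch_add(counter, nr_pages);
	else
		atomic_fetch_sub(counter, nr_pages);	/* bug */
}
```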
--
2.6.4
* Re: [PATCH 1/3] mm, vmscan: Remove redundant check in shrink_zones()
2016-07-18 14:50 ` [PATCH 1/3] mm, vmscan: Remove redundant check in shrink_zones() Mel Gorman
@ 2016-07-18 16:11 ` Johannes Weiner
2016-07-18 23:54 ` Minchan Kim
1 sibling, 0 replies; 11+ messages in thread
From: Johannes Weiner @ 2016-07-18 16:11 UTC (permalink / raw)
To: Mel Gorman; +Cc: Andrew Morton, Minchan Kim, Vlastimil Babka, Linux-MM, LKML
On Mon, Jul 18, 2016 at 03:50:24PM +0100, Mel Gorman wrote:
> As pointed out by Minchan Kim, shrink_zones() checks for populated
> zones in a zonelist but a zonelist can never contain unpopulated
> zones. While it's not related to the node-lru series, it can be
> cleaned up now.
>
> Suggested-by: Minchan Kim <minchan@kernel.org>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Ha, I didn't know that. But yeah, the zonelist building code excludes
unpopulated zones from the start. Neat.
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
* Re: [PATCH 2/3] mm, vmscan: Release/reacquire lru_lock on pgdat change
2016-07-18 14:50 ` [PATCH 2/3] mm, vmscan: Release/reacquire lru_lock on pgdat change Mel Gorman
@ 2016-07-18 16:13 ` Johannes Weiner
2016-07-18 23:58 ` Minchan Kim
1 sibling, 0 replies; 11+ messages in thread
From: Johannes Weiner @ 2016-07-18 16:13 UTC (permalink / raw)
To: Mel Gorman; +Cc: Andrew Morton, Minchan Kim, Vlastimil Babka, Linux-MM, LKML
On Mon, Jul 18, 2016 at 03:50:25PM +0100, Mel Gorman wrote:
> With node-lru, the locking is based on the pgdat. As Minchan pointed
> out, there is an opportunity to reduce LRU lock release/acquire in
> check_move_unevictable_pages by only changing lock on a pgdat change.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
* Re: [PATCH 3/3] mm, vmstat: remove zone and node double accounting by approximating retries -fix
2016-07-18 14:50 ` [PATCH 3/3] mm, vmstat: remove zone and node double accounting by approximating retries -fix Mel Gorman
@ 2016-07-18 16:14 ` Johannes Weiner
2016-07-18 23:59 ` Minchan Kim
1 sibling, 0 replies; 11+ messages in thread
From: Johannes Weiner @ 2016-07-18 16:14 UTC (permalink / raw)
To: Mel Gorman; +Cc: Andrew Morton, Minchan Kim, Vlastimil Babka, Linux-MM, LKML
On Mon, Jul 18, 2016 at 03:50:26PM +0100, Mel Gorman wrote:
> As pointed out by Vlastimil, the atomic_add() functions are already assumed
> to be able to handle negative numbers. The atomic_sub handling was wrong
> anyway but this patch fixes it unconditionally.
>
> This is a fix to the mmotm patch
> mm-vmstat-remove-zone-and-node-double-accounting-by-approximating-retries.patch
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
* Re: [PATCH 0/3] Follow-up fixes to node-lru series v3
2016-07-18 14:50 [PATCH 0/3] Follow-up fixes to node-lru series v3 Mel Gorman
` (2 preceding siblings ...)
2016-07-18 14:50 ` [PATCH 3/3] mm, vmstat: remove zone and node double accounting by approximating retries -fix Mel Gorman
@ 2016-07-18 20:02 ` Johannes Weiner
3 siblings, 0 replies; 11+ messages in thread
From: Johannes Weiner @ 2016-07-18 20:02 UTC (permalink / raw)
To: Mel Gorman; +Cc: Andrew Morton, Minchan Kim, Vlastimil Babka, Linux-MM, LKML
The v3 is a bit misleading. It's on top of, not instead of, the v2
series with the same name sent out previously. We need both series.
On Mon, Jul 18, 2016 at 03:50:23PM +0100, Mel Gorman wrote:
> This is another round of fixups to the node-lru series. The most important
> patch is the last one which deals with a highmem accounting issue.
>
> include/linux/mm_inline.h | 8 ++------
> mm/vmscan.c | 25 +++++++++++--------------
> 2 files changed, 13 insertions(+), 20 deletions(-)
* Re: [PATCH 1/3] mm, vmscan: Remove redundant check in shrink_zones()
2016-07-18 14:50 ` [PATCH 1/3] mm, vmscan: Remove redundant check in shrink_zones() Mel Gorman
2016-07-18 16:11 ` Johannes Weiner
@ 2016-07-18 23:54 ` Minchan Kim
1 sibling, 0 replies; 11+ messages in thread
From: Minchan Kim @ 2016-07-18 23:54 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, Johannes Weiner, Vlastimil Babka, Linux-MM, LKML
On Mon, Jul 18, 2016 at 03:50:24PM +0100, Mel Gorman wrote:
> As pointed out by Minchan Kim, shrink_zones() checks for populated
> zones in a zonelist but a zonelist can never contain unpopulated
> zones. While it's not related to the node-lru series, it can be
> cleaned up now.
>
> Suggested-by: Minchan Kim <minchan@kernel.org>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Minchan Kim <minchan@kernel.org>
Thanks.
* Re: [PATCH 2/3] mm, vmscan: Release/reacquire lru_lock on pgdat change
2016-07-18 14:50 ` [PATCH 2/3] mm, vmscan: Release/reacquire lru_lock on pgdat change Mel Gorman
2016-07-18 16:13 ` Johannes Weiner
@ 2016-07-18 23:58 ` Minchan Kim
1 sibling, 0 replies; 11+ messages in thread
From: Minchan Kim @ 2016-07-18 23:58 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, Johannes Weiner, Vlastimil Babka, Linux-MM, LKML
On Mon, Jul 18, 2016 at 03:50:25PM +0100, Mel Gorman wrote:
> With node-lru, the locking is based on the pgdat. As Minchan pointed
> out, there is an opportunity to reduce LRU lock release/acquire in
> check_move_unevictable_pages by only changing lock on a pgdat change.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
> mm/vmscan.c | 22 +++++++++++-----------
> 1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 45344acf52ba..a6f31617a08c 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3775,24 +3775,24 @@ int page_evictable(struct page *page)
> void check_move_unevictable_pages(struct page **pages, int nr_pages)
> {
> struct lruvec *lruvec;
> - struct zone *zone = NULL;
> + struct pglist_data *pgdat = NULL;
> int pgscanned = 0;
> int pgrescued = 0;
> int i;
>
> for (i = 0; i < nr_pages; i++) {
> struct page *page = pages[i];
> - struct zone *pagezone;
> + struct pglist_data *pagepgdat = page_pgdat(page);
No need to initialize it here.
>
> pgscanned++;
> - pagezone = page_zone(page);
> - if (pagezone != zone) {
> - if (zone)
> - spin_unlock_irq(zone_lru_lock(zone));
> - zone = pagezone;
> - spin_lock_irq(zone_lru_lock(zone));
> + pagepgdat = page_pgdat(page);
This initializes pagepgdat twice; please remove one of the two.
> + if (pagepgdat != pgdat) {
> + if (pgdat)
> + spin_unlock_irq(&pgdat->lru_lock);
> + pgdat = pagepgdat;
> + spin_lock_irq(&pgdat->lru_lock);
> }
> - lruvec = mem_cgroup_page_lruvec(page, zone->zone_pgdat);
> + lruvec = mem_cgroup_page_lruvec(page, pgdat);
>
> if (!PageLRU(page) || !PageUnevictable(page))
> continue;
> @@ -3808,10 +3808,10 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
> }
> }
>
> - if (zone) {
> + if (pgdat) {
> __count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
> __count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
> - spin_unlock_irq(zone_lru_lock(zone));
> + spin_unlock_irq(&pgdat->lru_lock);
> }
> }
> #endif /* CONFIG_SHMEM */
> --
> 2.6.4
>
* Re: [PATCH 3/3] mm, vmstat: remove zone and node double accounting by approximating retries -fix
2016-07-18 14:50 ` [PATCH 3/3] mm, vmstat: remove zone and node double accounting by approximating retries -fix Mel Gorman
2016-07-18 16:14 ` Johannes Weiner
@ 2016-07-18 23:59 ` Minchan Kim
1 sibling, 0 replies; 11+ messages in thread
From: Minchan Kim @ 2016-07-18 23:59 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, Johannes Weiner, Vlastimil Babka, Linux-MM, LKML
On Mon, Jul 18, 2016 at 03:50:26PM +0100, Mel Gorman wrote:
> As pointed out by Vlastimil, the atomic_add() functions are already assumed
> to be able to handle negative numbers. The atomic_sub handling was wrong
> anyway but this patch fixes it unconditionally.
>
> This is a fix to the mmotm patch
> mm-vmstat-remove-zone-and-node-double-accounting-by-approximating-retries.patch
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Minchan Kim <minchan@kernel.org>