Date: Wed, 13 Jul 2011 18:40:40 +0200
From: Johannes Weiner
To: Mel Gorman
Cc: Linux-MM, LKML, XFS, Dave Chinner, Christoph Hellwig, Wu Fengguang, Jan Kara, Rik van Riel, Minchan Kim
Subject: Re: [PATCH 4/5] mm: vmscan: Immediately reclaim end-of-LRU dirty pages when writeback completes
Message-ID: <20110713164040.GA13972@redhat.com>
In-Reply-To: <1310567487-15367-5-git-send-email-mgorman@suse.de>
References: <1310567487-15367-1-git-send-email-mgorman@suse.de> <1310567487-15367-5-git-send-email-mgorman@suse.de>

On Wed, Jul 13, 2011 at 03:31:26PM +0100, Mel Gorman wrote:
> When direct reclaim encounters a dirty page, it gets recycled around
> the LRU for another cycle. This patch marks the page PageReclaim using
> deactivate_page() so that the page gets reclaimed almost immediately
> after the page gets cleaned. This is to avoid reclaiming clean pages
> that are younger than a dirty page encountered at the end of the LRU
> that might have been something like a use-once page.
>
> Signed-off-by: Mel Gorman
> ---
>  include/linux/mmzone.h |    2 +-
>  mm/vmscan.c            |   10 ++++++++--
>  mm/vmstat.c            |    2 +-
>  3 files changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index c4508a2..bea7858 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -100,7 +100,7 @@ enum zone_stat_item {
>  	NR_UNSTABLE_NFS,	/* NFS unstable pages */
>  	NR_BOUNCE,
>  	NR_VMSCAN_WRITE,
> -	NR_VMSCAN_WRITE_SKIP,
> +	NR_VMSCAN_INVALIDATE,
>  	NR_VMSCAN_THROTTLED,
>  	NR_WRITEBACK_TEMP,	/* Writeback using temporary buffers */
>  	NR_ISOLATED_ANON,	/* Temporary isolated pages from anon lru */
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9826086..8e00aee 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -834,8 +834,13 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  			 */
>  			if (page_is_file_cache(page) &&
>  			    (!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
> -				inc_zone_page_state(page, NR_VMSCAN_WRITE_SKIP);
> -				goto keep_locked;
> +				inc_zone_page_state(page, NR_VMSCAN_INVALIDATE);
> +
> +				/* Immediately reclaim when written back */
> +				unlock_page(page);
> +				deactivate_page(page);
> +
> +				goto keep_dirty;
>  			}
>
>  			if (references == PAGEREF_RECLAIM_CLEAN)
> @@ -956,6 +961,7 @@ keep:
>  		reset_reclaim_mode(sc);
>  keep_lumpy:
>  		list_add(&page->lru, &ret_pages);
> +keep_dirty:
>  		VM_BUG_ON(PageLRU(page) || PageUnevictable(page));
>  	}

I really like the idea behind this patch, but I think all those pages
are lost, as PageLRU is cleared on isolation and lru_deactivate_fn
bails on them in turn.  If I'm not mistaken, the reference taken at
isolation is also leaked.