* [v4 PATCH 1/2] mm: vmscan: remove double slab pressure by inc'ing sc->nr_scanned
@ 2019-05-23  2:27 Yang Shi
From: Yang Shi @ 2019-05-23  2:27 UTC
  To: ying.huang, hannes, mhocko, mgorman, kirill.shutemov, josef,
	hughd, shakeelb, akpm
  Cc: yang.shi, linux-mm, linux-kernel

Commit 9092c71bb724 ("mm: use sc->priority for slab shrink targets")
broke the link between sc->nr_scanned and slab pressure: incrementing
sc->nr_scanned can no longer double the slab pressure, so it makes no
sense to keep bumping it for mapped and swapcache pages.  Worse, the
extra increments are counterproductive: an inflated sc->nr_scanned
prevents sc->priority from being raised, which would otherwise add
pressure on slab shrink.
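
For reference, since that commit the slab scan target in
do_shrink_slab() is derived from the reclaim priority alone; a
paraphrased sketch of mm/vmscan.c (deferred work and clamping elided):

	freeable = shrinker->count_objects(shrinker, shrinkctl);
	delta = freeable >> priority;	/* sc->priority sets the target */
	delta *= 4;
	do_div(delta, shrinker->seeks);
	total_scan += delta;		/* sc->nr_scanned plays no part */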

The bonnie test doesn't show any change in the behavior of the slab
shrinkers:

                        w/ patch          w/o patch
                      /sec     %CP       /sec     %CP
Sequential delete:  3960.6    94.6     3997.6     96.2
Random delete:      2518      63.8     2561.6     64.6

The slightly higher "/sec" without the patch is most likely explained
by its slightly higher CPU usage.

Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Michal Hocko <mhocko@kernel.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
v4: Added Johannes's ack

 mm/vmscan.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7acd0af..b65bc50 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1137,11 +1137,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (!sc->may_unmap && page_mapped(page))
 			goto keep_locked;
 
-		/* Double the slab pressure for mapped and swapcache pages */
-		if ((page_mapped(page) || PageSwapCache(page)) &&
-		    !(PageAnon(page) && !PageSwapBacked(page)))
-			sc->nr_scanned++;
-
 		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
 			(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
 
-- 
1.8.3.1


* Re: [v4 PATCH 2/2] mm: vmscan: correct some vmscan counters for THP swapout
@ 2019-05-23 15:51 Hillf Danton
From: Hillf Danton @ 2019-05-23 15:51 UTC
  To: Yang Shi
  Cc: ying.huang, hannes, mhocko, mgorman, kirill.shutemov, josef,
	hughd, shakeelb, akpm, linux-mm, linux-kernel


On Thu, 23 May 2019 10:27:38 +0800 Yang Shi wrote:
> 
> @@ -1642,14 +1650,14 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  	unsigned long nr_zone_taken[MAX_NR_ZONES] = { 0 };
>  	unsigned long nr_skipped[MAX_NR_ZONES] = { 0, };
>  	unsigned long skipped = 0;
> -	unsigned long scan, total_scan, nr_pages;
> +	unsigned long scan, total_scan;
> +	unsigned long nr_pages;
A change with no gain :)

>  	LIST_HEAD(pages_skipped);
>  	isolate_mode_t mode = (sc->may_unmap ? 0 : ISOLATE_UNMAPPED);
>  
> +	total_scan = 0;
>  	scan = 0;
> -	for (total_scan = 0;
> -	     scan < nr_to_scan && nr_taken < nr_to_scan && !list_empty(src);
> -	     total_scan++) {
> +	while (scan < nr_to_scan && !list_empty(src)) {
>  		struct page *page;
AFAICS scan currently prevents us from looping forever, while nr_taken
bails us out once we have taken what we asked for, so I doubt it makes
much sense to drop the nr_taken check.
>  
>  		page = lru_to_page(src);
> @@ -1657,9 +1665,12 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  
>  		VM_BUG_ON_PAGE(!PageLRU(page), page);
>  
> +		nr_pages = 1 << compound_order(page);
> +		total_scan += nr_pages;
> +
>  		if (page_zonenum(page) > sc->reclaim_idx) {
>  			list_move(&page->lru, &pages_skipped);
> -			nr_skipped[page_zonenum(page)]++;
> +			nr_skipped[page_zonenum(page)] += nr_pages;
>  			continue;
>  		}
>  
> @@ -1669,10 +1680,9 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  		 * ineligible pages.  This causes the VM to not reclaim any
>  		 * pages, triggering a premature OOM.
>  		 */
> -		scan++;
> +		scan += nr_pages;
The comment seems to contradict this change in the case where we fail
to add a huge page to the dst list; otherwise nr_taken already does
the right thing.  What I would prefer is to let scan do one thing at a
time.
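
To spell out that failure case (numbers assumed, nr_to_scan = 32 with
a PMD-sized THP at the head of src):

	nr_pages = 1 << compound_order(page);	/* 512 on x86-64 */
	scan += nr_pages;			/* 0 -> 512, past the bound */
	/*
	 * the while loop now exits even when __isolate_lru_page()
	 * fails to take the page, which is exactly the situation
	 * the premature-OOM comment above is guarding against
	 */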

>  		switch (__isolate_lru_page(page, mode)) {
>  		case 0:
> -			nr_pages = hpage_nr_pages(page);
>  			nr_taken += nr_pages;
>  			nr_zone_taken[page_zonenum(page)] += nr_pages;
>  			list_move(&page->lru, dst);
> -- 
> 1.8.3.1
> 
Best Regards
Hillf


