* [linux-rt-devel:linux-5.4.y-rt-rebase 113/253] mm/swap.c:348:9: sparse: sparse: context imbalance in 'rotate_reclaimable_page' - different lock contexts for basic block
@ 2020-01-28 10:26 kbuild test robot
  2020-02-04 11:29 ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 2+ messages in thread
From: kbuild test robot @ 2020-01-28 10:26 UTC (permalink / raw)
  To: kbuild-all


Hi Thomas,

First bad commit (maybe != root cause):

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-5.4.y-rt-rebase
head:   5897a52fd4fdb49ac156ecf09507d0b5a27eb05c
commit: 124a2170d0d899fe104f02e462bc7c70c9cb4492 [113/253] preempt: Provide preempt_*_(no)rt variants
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.1-153-g47b6dfef-dirty
        git checkout 124a2170d0d899fe104f02e462bc7c70c9cb4492
        make ARCH=x86_64 allmodconfig
        make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'

If you fix the issue, kindly add the following tag:
Reported-by: kbuild test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)

   include/linux/spinlock.h:393:9: sparse: sparse: context imbalance in 'pagevec_lru_move_fn' - unexpected unlock
>> mm/swap.c:348:9: sparse: sparse: context imbalance in 'rotate_reclaimable_page' - different lock contexts for basic block
>> mm/swap.c:440:13: sparse: sparse: context imbalance in '__lru_cache_activate_page' - different lock contexts for basic block
>> mm/swap.c:505:13: sparse: sparse: context imbalance in '__lru_cache_add' - different lock contexts for basic block
>> mm/swap.c:695:6: sparse: sparse: context imbalance in 'lru_add_drain_cpu' - different lock contexts for basic block
>> mm/swap.c:743:6: sparse: sparse: context imbalance in 'deactivate_file_page' - different lock contexts for basic block
>> mm/swap.c:775:9: sparse: sparse: context imbalance in 'deactivate_page' - different lock contexts for basic block
>> mm/swap.c:801:9: sparse: sparse: context imbalance in 'mark_page_lazyfree' - different lock contexts for basic block

vim +/rotate_reclaimable_page +348 mm/swap.c

902aaed0d983df Hisashi Hifumi     2007-10-16  340  
^1da177e4c3f41 Linus Torvalds     2005-04-16  341  /*
^1da177e4c3f41 Linus Torvalds     2005-04-16  342   * Writeback is about to end against a page which has been marked for immediate
^1da177e4c3f41 Linus Torvalds     2005-04-16  343   * reclaim.  If it still appears to be reclaimable, move it to the tail of the
902aaed0d983df Hisashi Hifumi     2007-10-16  344   * inactive list.
^1da177e4c3f41 Linus Torvalds     2005-04-16  345   */
ac6aadb24b7d4f Miklos Szeredi     2008-04-28  346  void rotate_reclaimable_page(struct page *page)
^1da177e4c3f41 Linus Torvalds     2005-04-16  347  {
c55e8d035b28b2 Johannes Weiner    2017-02-24 @348  	if (!PageLocked(page) && !PageDirty(page) &&
894bc310419ac9 Lee Schermerhorn   2008-10-18  349  	    !PageUnevictable(page) && PageLRU(page)) {
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  350  		struct swap_pagevec *swpvec;
902aaed0d983df Hisashi Hifumi     2007-10-16  351  		struct pagevec *pvec;
^1da177e4c3f41 Linus Torvalds     2005-04-16  352  		unsigned long flags;
^1da177e4c3f41 Linus Torvalds     2005-04-16  353  
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  354  		get_page(page);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  355  
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  356  		swpvec = lock_swap_pvec_irqsave(&lru_rotate_pvecs, &flags);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  357  		pvec = &swpvec->pvec;
8f182270dfec43 Lukasz Odzioba     2016-06-24  358  		if (!pagevec_add(pvec, page) || PageCompound(page))
902aaed0d983df Hisashi Hifumi     2007-10-16  359  			pagevec_move_tail(pvec);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  360  		unlock_swap_pvec_irqrestore(swpvec, flags);
ac6aadb24b7d4f Miklos Szeredi     2008-04-28  361  	}
^1da177e4c3f41 Linus Torvalds     2005-04-16  362  }
^1da177e4c3f41 Linus Torvalds     2005-04-16  363  
fa9add641b1b1c Hugh Dickins       2012-05-29  364  static void update_page_reclaim_stat(struct lruvec *lruvec,
3e2f41f1f64744 KOSAKI Motohiro    2009-01-07  365  				     int file, int rotated)
3e2f41f1f64744 KOSAKI Motohiro    2009-01-07  366  {
fa9add641b1b1c Hugh Dickins       2012-05-29  367  	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
3e2f41f1f64744 KOSAKI Motohiro    2009-01-07  368  
3e2f41f1f64744 KOSAKI Motohiro    2009-01-07  369  	reclaim_stat->recent_scanned[file]++;
3e2f41f1f64744 KOSAKI Motohiro    2009-01-07  370  	if (rotated)
3e2f41f1f64744 KOSAKI Motohiro    2009-01-07  371  		reclaim_stat->recent_rotated[file]++;
3e2f41f1f64744 KOSAKI Motohiro    2009-01-07  372  }
3e2f41f1f64744 KOSAKI Motohiro    2009-01-07  373  
fa9add641b1b1c Hugh Dickins       2012-05-29  374  static void __activate_page(struct page *page, struct lruvec *lruvec,
fa9add641b1b1c Hugh Dickins       2012-05-29  375  			    void *arg)
^1da177e4c3f41 Linus Torvalds     2005-04-16  376  {
7a608572a282a7 Linus Torvalds     2011-01-17  377  	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
4f98a2fee8acdb Rik van Riel       2008-10-18  378  		int file = page_is_file_cache(page);
401a8e1c167008 Johannes Weiner    2009-09-21  379  		int lru = page_lru_base_type(page);
4f98a2fee8acdb Rik van Riel       2008-10-18  380  
fa9add641b1b1c Hugh Dickins       2012-05-29  381  		del_page_from_lru_list(page, lruvec, lru);
^1da177e4c3f41 Linus Torvalds     2005-04-16  382  		SetPageActive(page);
4f98a2fee8acdb Rik van Riel       2008-10-18  383  		lru += LRU_ACTIVE;
fa9add641b1b1c Hugh Dickins       2012-05-29  384  		add_page_to_lru_list(page, lruvec, lru);
24b7e5819ad5cb Mel Gorman         2014-08-06  385  		trace_mm_lru_activate(page);
744ed144275776 Shaohua Li         2011-01-13  386  
fa9add641b1b1c Hugh Dickins       2012-05-29  387  		__count_vm_event(PGACTIVATE);
fa9add641b1b1c Hugh Dickins       2012-05-29  388  		update_page_reclaim_stat(lruvec, file, 1);
744ed144275776 Shaohua Li         2011-01-13  389  	}
eb709b0d062efd Shaohua Li         2011-05-24  390  }
eb709b0d062efd Shaohua Li         2011-05-24  391  
eb709b0d062efd Shaohua Li         2011-05-24  392  #ifdef CONFIG_SMP
eb709b0d062efd Shaohua Li         2011-05-24  393  static void activate_page_drain(int cpu)
eb709b0d062efd Shaohua Li         2011-05-24  394  {
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  395  	struct swap_pagevec *swpvec = lock_swap_pvec_cpu(&activate_page_pvecs, cpu);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  396  	struct pagevec *pvec = &swpvec->pvec;
eb709b0d062efd Shaohua Li         2011-05-24  397  
eb709b0d062efd Shaohua Li         2011-05-24  398  	if (pagevec_count(pvec))
eb709b0d062efd Shaohua Li         2011-05-24  399  		pagevec_lru_move_fn(pvec, __activate_page, NULL);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  400  	unlock_swap_pvec_cpu(swpvec);
eb709b0d062efd Shaohua Li         2011-05-24  401  }
eb709b0d062efd Shaohua Li         2011-05-24  402  
5fbc461636c32e Chris Metcalf      2013-09-12  403  static bool need_activate_page_drain(int cpu)
5fbc461636c32e Chris Metcalf      2013-09-12  404  {
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  405  	return pagevec_count(per_cpu_ptr(&activate_page_pvecs.pvec, cpu)) != 0;
5fbc461636c32e Chris Metcalf      2013-09-12  406  }
5fbc461636c32e Chris Metcalf      2013-09-12  407  
eb709b0d062efd Shaohua Li         2011-05-24  408  void activate_page(struct page *page)
eb709b0d062efd Shaohua Li         2011-05-24  409  {
800d8c63b2e989 Kirill A. Shutemov 2016-07-26  410  	page = compound_head(page);
eb709b0d062efd Shaohua Li         2011-05-24  411  	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  412  		struct swap_pagevec *swpvec;
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  413  		struct pagevec *pvec;
eb709b0d062efd Shaohua Li         2011-05-24  414  
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  415  		get_page(page);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  416  		swpvec = lock_swap_pvec(&activate_page_pvecs);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  417  		pvec = &swpvec->pvec;
8f182270dfec43 Lukasz Odzioba     2016-06-24  418  		if (!pagevec_add(pvec, page) || PageCompound(page))
eb709b0d062efd Shaohua Li         2011-05-24  419  			pagevec_lru_move_fn(pvec, __activate_page, NULL);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  420  		unlock_swap_pvec(swpvec, &activate_page_pvecs);
eb709b0d062efd Shaohua Li         2011-05-24  421  	}
eb709b0d062efd Shaohua Li         2011-05-24  422  }
eb709b0d062efd Shaohua Li         2011-05-24  423  
eb709b0d062efd Shaohua Li         2011-05-24  424  #else
eb709b0d062efd Shaohua Li         2011-05-24  425  static inline void activate_page_drain(int cpu)
eb709b0d062efd Shaohua Li         2011-05-24  426  {
eb709b0d062efd Shaohua Li         2011-05-24  427  }
eb709b0d062efd Shaohua Li         2011-05-24  428  
eb709b0d062efd Shaohua Li         2011-05-24  429  void activate_page(struct page *page)
eb709b0d062efd Shaohua Li         2011-05-24  430  {
f4b7e272b5c042 Andrey Ryabinin    2019-03-05  431  	pg_data_t *pgdat = page_pgdat(page);
eb709b0d062efd Shaohua Li         2011-05-24  432  
800d8c63b2e989 Kirill A. Shutemov 2016-07-26  433  	page = compound_head(page);
f4b7e272b5c042 Andrey Ryabinin    2019-03-05  434  	spin_lock_irq(&pgdat->lru_lock);
f4b7e272b5c042 Andrey Ryabinin    2019-03-05  435  	__activate_page(page, mem_cgroup_page_lruvec(page, pgdat), NULL);
f4b7e272b5c042 Andrey Ryabinin    2019-03-05  436  	spin_unlock_irq(&pgdat->lru_lock);
^1da177e4c3f41 Linus Torvalds     2005-04-16  437  }
eb709b0d062efd Shaohua Li         2011-05-24  438  #endif
^1da177e4c3f41 Linus Torvalds     2005-04-16  439  
059285a25f30c1 Mel Gorman         2013-07-03 @440  static void __lru_cache_activate_page(struct page *page)
059285a25f30c1 Mel Gorman         2013-07-03  441  {
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  442  	struct swap_pagevec *swpvec = lock_swap_pvec(&lru_add_pvec);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  443  	struct pagevec *pvec = &swpvec->pvec;
059285a25f30c1 Mel Gorman         2013-07-03  444  	int i;
059285a25f30c1 Mel Gorman         2013-07-03  445  
059285a25f30c1 Mel Gorman         2013-07-03  446  	/*
059285a25f30c1 Mel Gorman         2013-07-03  447  	 * Search backwards on the optimistic assumption that the page being
059285a25f30c1 Mel Gorman         2013-07-03  448  	 * activated has just been added to this pagevec. Note that only
059285a25f30c1 Mel Gorman         2013-07-03  449  	 * the local pagevec is examined as a !PageLRU page could be in the
059285a25f30c1 Mel Gorman         2013-07-03  450  	 * process of being released, reclaimed, migrated or on a remote
059285a25f30c1 Mel Gorman         2013-07-03  451  	 * pagevec that is currently being drained. Furthermore, marking
059285a25f30c1 Mel Gorman         2013-07-03  452  	 * a remote pagevec's page PageActive potentially hits a race where
059285a25f30c1 Mel Gorman         2013-07-03  453  	 * a page is marked PageActive just after it is added to the inactive
059285a25f30c1 Mel Gorman         2013-07-03  454  	 * list causing accounting errors and BUG_ON checks to trigger.
059285a25f30c1 Mel Gorman         2013-07-03  455  	 */
059285a25f30c1 Mel Gorman         2013-07-03  456  	for (i = pagevec_count(pvec) - 1; i >= 0; i--) {
059285a25f30c1 Mel Gorman         2013-07-03  457  		struct page *pagevec_page = pvec->pages[i];
059285a25f30c1 Mel Gorman         2013-07-03  458  
059285a25f30c1 Mel Gorman         2013-07-03  459  		if (pagevec_page == page) {
059285a25f30c1 Mel Gorman         2013-07-03  460  			SetPageActive(page);
059285a25f30c1 Mel Gorman         2013-07-03  461  			break;
059285a25f30c1 Mel Gorman         2013-07-03  462  		}
059285a25f30c1 Mel Gorman         2013-07-03  463  	}
059285a25f30c1 Mel Gorman         2013-07-03  464  
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  465  	unlock_swap_pvec(swpvec, &lru_add_pvec);
059285a25f30c1 Mel Gorman         2013-07-03  466  }
059285a25f30c1 Mel Gorman         2013-07-03  467  
^1da177e4c3f41 Linus Torvalds     2005-04-16  468  /*
^1da177e4c3f41 Linus Torvalds     2005-04-16  469   * Mark a page as having seen activity.
^1da177e4c3f41 Linus Torvalds     2005-04-16  470   *
^1da177e4c3f41 Linus Torvalds     2005-04-16  471   * inactive,unreferenced	->	inactive,referenced
^1da177e4c3f41 Linus Torvalds     2005-04-16  472   * inactive,referenced		->	active,unreferenced
^1da177e4c3f41 Linus Torvalds     2005-04-16  473   * active,unreferenced		->	active,referenced
eb39d618f9e80f Hugh Dickins       2014-08-06  474   *
eb39d618f9e80f Hugh Dickins       2014-08-06  475   * When a newly allocated page is not yet visible, so safe for non-atomic ops,
eb39d618f9e80f Hugh Dickins       2014-08-06  476   * __SetPageReferenced(page) may be substituted for mark_page_accessed(page).
^1da177e4c3f41 Linus Torvalds     2005-04-16  477   */
920c7a5d0c94b8 Harvey Harrison    2008-02-04  478  void mark_page_accessed(struct page *page)
^1da177e4c3f41 Linus Torvalds     2005-04-16  479  {
e90309c9f7722d Kirill A. Shutemov 2016-01-15  480  	page = compound_head(page);
894bc310419ac9 Lee Schermerhorn   2008-10-18  481  	if (!PageActive(page) && !PageUnevictable(page) &&
059285a25f30c1 Mel Gorman         2013-07-03  482  			PageReferenced(page)) {
059285a25f30c1 Mel Gorman         2013-07-03  483  
059285a25f30c1 Mel Gorman         2013-07-03  484  		/*
059285a25f30c1 Mel Gorman         2013-07-03  485  		 * If the page is on the LRU, queue it for activation via
059285a25f30c1 Mel Gorman         2013-07-03  486  		 * activate_page_pvecs. Otherwise, assume the page is on a
059285a25f30c1 Mel Gorman         2013-07-03  487  		 * pagevec, mark it active and it'll be moved to the active
059285a25f30c1 Mel Gorman         2013-07-03  488  		 * LRU on the next drain.
059285a25f30c1 Mel Gorman         2013-07-03  489  		 */
059285a25f30c1 Mel Gorman         2013-07-03  490  		if (PageLRU(page))
^1da177e4c3f41 Linus Torvalds     2005-04-16  491  			activate_page(page);
059285a25f30c1 Mel Gorman         2013-07-03  492  		else
059285a25f30c1 Mel Gorman         2013-07-03  493  			__lru_cache_activate_page(page);
^1da177e4c3f41 Linus Torvalds     2005-04-16  494  		ClearPageReferenced(page);
a528910e12ec7e Johannes Weiner    2014-04-03  495  		if (page_is_file_cache(page))
a528910e12ec7e Johannes Weiner    2014-04-03  496  			workingset_activation(page);
^1da177e4c3f41 Linus Torvalds     2005-04-16  497  	} else if (!PageReferenced(page)) {
^1da177e4c3f41 Linus Torvalds     2005-04-16  498  		SetPageReferenced(page);
^1da177e4c3f41 Linus Torvalds     2005-04-16  499  	}
33c3fc71c8cfa3 Vladimir Davydov   2015-09-09  500  	if (page_is_idle(page))
33c3fc71c8cfa3 Vladimir Davydov   2015-09-09  501  		clear_page_idle(page);
^1da177e4c3f41 Linus Torvalds     2005-04-16  502  }
^1da177e4c3f41 Linus Torvalds     2005-04-16  503  EXPORT_SYMBOL(mark_page_accessed);
^1da177e4c3f41 Linus Torvalds     2005-04-16  504  
2329d3751b082b Jianyu Zhan        2014-06-04 @505  static void __lru_cache_add(struct page *page)
^1da177e4c3f41 Linus Torvalds     2005-04-16  506  {
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  507  	struct swap_pagevec *swpvec = lock_swap_pvec(&lru_add_pvec);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  508  	struct pagevec *pvec = &swpvec->pvec;
13f7f78981e49f Mel Gorman         2013-07-03  509  
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  510  	get_page(page);
8f182270dfec43 Lukasz Odzioba     2016-06-24  511  	if (!pagevec_add(pvec, page) || PageCompound(page))
a0b8cab3b9b2ef Mel Gorman         2013-07-03  512  		__pagevec_lru_add(pvec);
6ecb3de7d7b700 Thomas Gleixner    2019-04-18  513  	unlock_swap_pvec(swpvec, &lru_add_pvec);
^1da177e4c3f41 Linus Torvalds     2005-04-16  514  }
2329d3751b082b Jianyu Zhan        2014-06-04  515  

:::::: The code at line 348 was first introduced by commit
:::::: c55e8d035b28b2867e68b0e2d0eee2c0f1016b43 mm: vmscan: move dirty pages out of the way until they're flushed

:::::: TO: Johannes Weiner <hannes@cmpxchg.org>
:::::: CC: Linus Torvalds <torvalds@linux-foundation.org>

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation


* Re: [linux-rt-devel:linux-5.4.y-rt-rebase 113/253] mm/swap.c:348:9: sparse: sparse: context imbalance in 'rotate_reclaimable_page' - different lock contexts for basic block
  2020-01-28 10:26 [linux-rt-devel:linux-5.4.y-rt-rebase 113/253] mm/swap.c:348:9: sparse: sparse: context imbalance in 'rotate_reclaimable_page' - different lock contexts for basic block kbuild test robot
@ 2020-02-04 11:29 ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 2+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-04 11:29 UTC (permalink / raw)
  To: kbuild-all


On 2020-01-28 18:26:57 [+0800], kbuild test robot wrote:
…
> sparse warnings: (new ones prefixed by >>)
> 
>    include/linux/spinlock.h:393:9: sparse: sparse: context imbalance in 'pagevec_lru_move_fn' - unexpected unlock

This looks okay and is the same as upstream. The unlock only happens if
`pgdat' is set, and that only happens inside the for-loop.

> >> mm/swap.c:348:9: sparse: sparse: context imbalance in 'rotate_reclaimable_page' - different lock contexts for basic block
> >> mm/swap.c:440:13: sparse: sparse: context imbalance in '__lru_cache_activate_page' - different lock contexts for basic block
> >> mm/swap.c:505:13: sparse: sparse: context imbalance in '__lru_cache_add' - different lock contexts for basic block
> >> mm/swap.c:695:6: sparse: sparse: context imbalance in 'lru_add_drain_cpu' - different lock contexts for basic block
> >> mm/swap.c:743:6: sparse: sparse: context imbalance in 'deactivate_file_page' - different lock contexts for basic block
> >> mm/swap.c:775:9: sparse: sparse: context imbalance in 'deactivate_page' - different lock contexts for basic block
> >> mm/swap.c:801:9: sparse: sparse: context imbalance in 'mark_page_lazyfree' - different lock contexts for basic block

Those look okay, too. The lock part dereferences a per-CPU variable and
returns the local pointer; the unlock part then uses that local pointer.
This, together with the `if' (spinlock vs. preempt-disable mode), might
confuse sparse.

Sebastian

