linux-mm.kvack.org archive mirror
* [PATCH 06/16] mm/pagemap: Cleanup PREEMPT_COUNT leftovers
       [not found] <20201029165019.14218-1-urezki@gmail.com>
@ 2020-10-29 16:50 ` Uladzislau Rezki (Sony)
  2020-10-29 20:57   ` Uladzislau Rezki
From: Uladzislau Rezki (Sony) @ 2020-10-29 16:50 UTC (permalink / raw)
  To: LKML, RCU, Paul E. McKenney
  Cc: Andrew Morton, Peter Zijlstra, Michal Hocko, Thomas Gleixner,
	Theodore Y. Ts'o, Joel Fernandes, Sebastian Andrzej Siewior,
	Uladzislau Rezki, Oleksiy Avramchenko, linux-mm

From: Thomas Gleixner <tglx@linutronix.de>

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 include/linux/pagemap.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index c77b7c31b2e4..cbfbe2bcca75 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -204,9 +204,7 @@ void release_pages(struct page **pages, int nr);
 static inline int __page_cache_add_speculative(struct page *page, int count)
 {
 #ifdef CONFIG_TINY_RCU
-# ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
+	VM_BUG_ON(preemptible())
 	/*
 	 * Preempt must be disabled here - we rely on rcu_read_lock doing
 	 * this for us.
-- 
2.20.1
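
For readers outside the kernel tree, the following minimal userspace sketch illustrates why the replacement is equivalent in intent: with a preempt count always maintained, "not in atomic context and interrupts enabled" is just another way of saying "preemptible". The mock helpers below are simplified assumptions modelled on the usual kernel semantics (in_atomic() roughly meaning preempt_count() != 0, preemptible() roughly meaning preempt_count() == 0 with interrupts enabled); they are not the real kernel definitions.

/*
 * Illustrative userspace sketch only -- not kernel code.  It enumerates
 * the four possible states and checks that the old assertion condition,
 * !in_atomic() && !irqs_disabled(), matches preemptible() under the
 * simplified (assumed) definitions below.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static unsigned int mock_preempt_count;
static bool mock_irqs_off;

/* Simplified stand-ins for the kernel helpers, for illustration only. */
static bool in_atomic(void)     { return mock_preempt_count != 0; }
static bool irqs_disabled(void) { return mock_irqs_off; }
static bool preemptible(void)   { return mock_preempt_count == 0 && !mock_irqs_off; }

int main(void)
{
	for (int count = 0; count <= 1; count++) {
		for (int irqs_off = 0; irqs_off <= 1; irqs_off++) {
			mock_preempt_count = count;
			mock_irqs_off = irqs_off;

			bool old_check = !in_atomic() && !irqs_disabled();
			bool new_check = preemptible();

			assert(old_check == new_check);
		}
	}
	printf("old and new assertion conditions agree in all four states\n");
	return 0;
}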




* Re: [PATCH 06/16] mm/pagemap: Cleanup PREEMPT_COUNT leftovers
  2020-10-29 16:50 ` [PATCH 06/16] mm/pagemap: Cleanup PREEMPT_COUNT leftovers Uladzislau Rezki (Sony)
@ 2020-10-29 20:57   ` Uladzislau Rezki
  2020-10-29 21:26     ` Paul E. McKenney
From: Uladzislau Rezki @ 2020-10-29 20:57 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, RCU, Paul E. McKenney, Andrew Morton, Peter Zijlstra,
	Michal Hocko, Thomas Gleixner, Theodore Y. Ts'o,
	Joel Fernandes, Sebastian Andrzej Siewior, Oleksiy Avramchenko,
	linux-mm, urezki

On Thu, Oct 29, 2020 at 05:50:09PM +0100, Uladzislau Rezki (Sony) wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> removed. Clean up the leftovers before doing so.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
>  include/linux/pagemap.h | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index c77b7c31b2e4..cbfbe2bcca75 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -204,9 +204,7 @@ void release_pages(struct page **pages, int nr);
>  static inline int __page_cache_add_speculative(struct page *page, int count)
>  {
>  #ifdef CONFIG_TINY_RCU
> -# ifdef CONFIG_PREEMPT_COUNT
> -	VM_BUG_ON(!in_atomic() && !irqs_disabled());
> -# endif
> +	VM_BUG_ON(preemptible())
>  	/*
>  	 * Preempt must be disabled here - we rely on rcu_read_lock doing
>  	 * this for us.
> -- 
> 2.20.1
> 
Hello, Paul.

Sorry for a small mistake; it was fixed by you before, but I took an
old version of the patch in question. Please use the one below instead of
the posted one:

Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Mon Sep 14 19:25:00 2020 +0200

    mm/pagemap: Cleanup PREEMPT_COUNT leftovers
    
    CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
    removed. Clean up the leftovers before doing so.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: linux-mm@kvack.org
    [ paulmck: Fix !SMP build error per kernel test robot feedback. ]
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 7de11dcd534d..b3d9d9217ea0 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -168,9 +168,7 @@ void release_pages(struct page **pages, int nr);
 static inline int __page_cache_add_speculative(struct page *page, int count)
 {
 #ifdef CONFIG_TINY_RCU
-# ifdef CONFIG_PREEMPT_COUNT
-       VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
+       VM_BUG_ON(preemptible());
        /*
         * Preempt must be disabled here - we rely on rcu_read_lock doing
         * this for us.

--
Vlad Rezki
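
For quick reference, this is how the CONFIG_TINY_RCU guard in __page_cache_add_speculative() reads once the corrected hunk above is applied. It is reconstructed from the hunk's context lines only (the closing of the comment is assumed), not copied from a kernel tree; the visible difference from the hunk posted earlier in the thread is the trailing semicolon on the VM_BUG_ON() line.

static inline int __page_cache_add_speculative(struct page *page, int count)
{
#ifdef CONFIG_TINY_RCU
	VM_BUG_ON(preemptible());
	/*
	 * Preempt must be disabled here - we rely on rcu_read_lock doing
	 * this for us.
	 */
	/* ... remainder of the function unchanged and not shown in the hunk ... */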



* Re: [PATCH 06/16] mm/pagemap: Cleanup PREEMPT_COUNT leftovers
  2020-10-29 20:57   ` Uladzislau Rezki
@ 2020-10-29 21:26     ` Paul E. McKenney
From: Paul E. McKenney @ 2020-10-29 21:26 UTC (permalink / raw)
  To: Uladzislau Rezki
  Cc: LKML, RCU, Andrew Morton, Peter Zijlstra, Michal Hocko,
	Thomas Gleixner, Theodore Y. Ts'o, Joel Fernandes,
	Sebastian Andrzej Siewior, Oleksiy Avramchenko, linux-mm

On Thu, Oct 29, 2020 at 09:57:17PM +0100, Uladzislau Rezki wrote:
> On Thu, Oct 29, 2020 at 05:50:09PM +0100, Uladzislau Rezki (Sony) wrote:
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> > removed. Clean up the leftovers before doing so.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: linux-mm@kvack.org
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> > ---
> >  include/linux/pagemap.h | 4 +---
> >  1 file changed, 1 insertion(+), 3 deletions(-)
> > 
> > diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> > index c77b7c31b2e4..cbfbe2bcca75 100644
> > --- a/include/linux/pagemap.h
> > +++ b/include/linux/pagemap.h
> > @@ -204,9 +204,7 @@ void release_pages(struct page **pages, int nr);
> >  static inline int __page_cache_add_speculative(struct page *page, int count)
> >  {
> >  #ifdef CONFIG_TINY_RCU
> > -# ifdef CONFIG_PREEMPT_COUNT
> > -	VM_BUG_ON(!in_atomic() && !irqs_disabled());
> > -# endif
> > +	VM_BUG_ON(preemptible())
> >  	/*
> >  	 * Preempt must be disabled here - we rely on rcu_read_lock doing
> >  	 * this for us.
> > -- 
> > 2.20.1
> > 
> Hello, Paul.
> 
> Sorry for a small mistake; it was fixed by you before, but I took an
> old version of the patch in question. Please use the one below instead of
> the posted one:

We have all been there and done that!  ;-)

I will give this update a spin and see what happens.

							Thanx, Paul

> Author: Thomas Gleixner <tglx@linutronix.de>
> Date:   Mon Sep 14 19:25:00 2020 +0200
> 
>     mm/pagemap: Cleanup PREEMPT_COUNT leftovers
>     
>     CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
>     removed. Clean up the leftovers before doing so.
>     
>     Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
>     Cc: Andrew Morton <akpm@linux-foundation.org>
>     Cc: linux-mm@kvack.org
>     [ paulmck: Fix !SMP build error per kernel test robot feedback. ]
>     Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> 
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 7de11dcd534d..b3d9d9217ea0 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -168,9 +168,7 @@ void release_pages(struct page **pages, int nr);
>  static inline int __page_cache_add_speculative(struct page *page, int count)
>  {
>  #ifdef CONFIG_TINY_RCU
> -# ifdef CONFIG_PREEMPT_COUNT
> -       VM_BUG_ON(!in_atomic() && !irqs_disabled());
> -# endif
> +       VM_BUG_ON(preemptible());
>         /*
>          * Preempt must be disabled here - we rely on rcu_read_lock doing
>          * this for us.
> 
> --
> Vlad Rezki


