[06/16] mm/pagemap: Cleanup PREEMPT_COUNT leftovers

Message ID 20201029165019.14218-6-urezki@gmail.com
State New
Series
  • [01/16] rcu/tree: Add a work to allocate pages from regular context

Commit Message

Uladzislau Rezki Oct. 29, 2020, 4:50 p.m. UTC
From: Thomas Gleixner <tglx@linutronix.de>

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Cleanup the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 include/linux/pagemap.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

Comments

Uladzislau Rezki Oct. 29, 2020, 8:57 p.m. UTC | #1
On Thu, Oct 29, 2020 at 05:50:09PM +0100, Uladzislau Rezki (Sony) wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> removed. Cleanup the leftovers before doing so.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
>  include/linux/pagemap.h | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index c77b7c31b2e4..cbfbe2bcca75 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -204,9 +204,7 @@ void release_pages(struct page **pages, int nr);
>  static inline int __page_cache_add_speculative(struct page *page, int count)
>  {
>  #ifdef CONFIG_TINY_RCU
> -# ifdef CONFIG_PREEMPT_COUNT
> -	VM_BUG_ON(!in_atomic() && !irqs_disabled());
> -# endif
> +	VM_BUG_ON(preemptible())
>  	/*
>  	 * Preempt must be disabled here - we rely on rcu_read_lock doing
>  	 * this for us.
> -- 
> 2.20.1
> 
Hello, Paul.

Sorry for a small mistake: it was fixed by you before, but I took an
old version of the patch in question. Please use the one below instead of
the posted one:

Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Mon Sep 14 19:25:00 2020 +0200

    mm/pagemap: Cleanup PREEMPT_COUNT leftovers
    
    CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
    removed. Cleanup the leftovers before doing so.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: linux-mm@kvack.org
    [ paulmck: Fix !SMP build error per kernel test robot feedback. ]
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 7de11dcd534d..b3d9d9217ea0 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -168,9 +168,7 @@ void release_pages(struct page **pages, int nr);
 static inline int __page_cache_add_speculative(struct page *page, int count)
 {
 #ifdef CONFIG_TINY_RCU
-# ifdef CONFIG_PREEMPT_COUNT
-       VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
+       VM_BUG_ON(preemptible());
        /*
         * Preempt must be disabled here - we rely on rcu_read_lock doing
         * this for us.

--
Vlad Rezki
Paul E. McKenney Oct. 29, 2020, 9:26 p.m. UTC | #2
On Thu, Oct 29, 2020 at 09:57:17PM +0100, Uladzislau Rezki wrote:
> On Thu, Oct 29, 2020 at 05:50:09PM +0100, Uladzislau Rezki (Sony) wrote:
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> > removed. Cleanup the leftovers before doing so.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: linux-mm@kvack.org
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> > ---
> >  include/linux/pagemap.h | 4 +---
> >  1 file changed, 1 insertion(+), 3 deletions(-)
> > 
> > diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> > index c77b7c31b2e4..cbfbe2bcca75 100644
> > --- a/include/linux/pagemap.h
> > +++ b/include/linux/pagemap.h
> > @@ -204,9 +204,7 @@ void release_pages(struct page **pages, int nr);
> >  static inline int __page_cache_add_speculative(struct page *page, int count)
> >  {
> >  #ifdef CONFIG_TINY_RCU
> > -# ifdef CONFIG_PREEMPT_COUNT
> > -	VM_BUG_ON(!in_atomic() && !irqs_disabled());
> > -# endif
> > +	VM_BUG_ON(preemptible())
> >  	/*
> >  	 * Preempt must be disabled here - we rely on rcu_read_lock doing
> >  	 * this for us.
> > -- 
> > 2.20.1
> > 
> Hello, Paul.
> 
> Sorry for a small mistake: it was fixed by you before, but I took an
> old version of the patch in question. Please use the one below instead of
> the posted one:

We have all been there and done that!  ;-)

I will give this update a spin and see what happens.

							Thanx, Paul

> Author: Thomas Gleixner <tglx@linutronix.de>
> Date:   Mon Sep 14 19:25:00 2020 +0200
> 
>     mm/pagemap: Cleanup PREEMPT_COUNT leftovers
>     
>     CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
>     removed. Cleanup the leftovers before doing so.
>     
>     Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
>     Cc: Andrew Morton <akpm@linux-foundation.org>
>     Cc: linux-mm@kvack.org
>     [ paulmck: Fix !SMP build error per kernel test robot feedback. ]
>     Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> 
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 7de11dcd534d..b3d9d9217ea0 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -168,9 +168,7 @@ void release_pages(struct page **pages, int nr);
>  static inline int __page_cache_add_speculative(struct page *page, int count)
>  {
>  #ifdef CONFIG_TINY_RCU
> -# ifdef CONFIG_PREEMPT_COUNT
> -       VM_BUG_ON(!in_atomic() && !irqs_disabled());
> -# endif
> +       VM_BUG_ON(preemptible());
>         /*
>          * Preempt must be disabled here - we rely on rcu_read_lock doing
>          * this for us.
> 
> --
> Vlad Rezki

Patch

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index c77b7c31b2e4..cbfbe2bcca75 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -204,9 +204,7 @@ void release_pages(struct page **pages, int nr);
 static inline int __page_cache_add_speculative(struct page *page, int count)
 {
 #ifdef CONFIG_TINY_RCU
-# ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
+	VM_BUG_ON(preemptible())
 	/*
 	 * Preempt must be disabled here - we rely on rcu_read_lock doing
 	 * this for us.