All of lore.kernel.org
* [PATCH] memcg: Optimise relock_page_lruvec functions
@ 2021-10-19 12:50 Matthew Wilcox (Oracle)
  2021-10-25 10:44 ` Vlastimil Babka
  0 siblings, 1 reply; 3+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-10-19 12:50 UTC (permalink / raw)
  To: linux-mm
  Cc: Matthew Wilcox (Oracle),
	Alexander Duyck, Alex Shi, Hugh Dickins, Johannes Weiner,
	Vlastimil Babka

Leave interrupts disabled when we change which lru lock is held.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3096c9a0ee01..a6a90b00a22b 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1524,16 +1524,22 @@ static inline bool page_matches_lruvec(struct page *page, struct lruvec *lruvec)
 }
 
 /* Don't lock again iff page's lruvec locked */
-static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
+static inline struct lruvec *relock_page_lruvec(struct page *page,
 		struct lruvec *locked_lruvec)
 {
-	if (locked_lruvec) {
-		if (page_matches_lruvec(page, locked_lruvec))
-			return locked_lruvec;
+	if (page_matches_lruvec(page, locked_lruvec))
+		return locked_lruvec;
 
-		unlock_page_lruvec_irq(locked_lruvec);
-	}
+	unlock_page_lruvec(locked_lruvec);
+	return lock_page_lruvec(page);
+}
 
+/* Don't lock again iff page's lruvec locked */
+static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
+		struct lruvec *locked_lruvec)
+{
+	if (locked_lruvec)
+		return relock_page_lruvec(page, locked_lruvec);
 	return lock_page_lruvec_irq(page);
 }
 
@@ -1541,13 +1547,8 @@ static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
 static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
 		struct lruvec *locked_lruvec, unsigned long *flags)
 {
-	if (locked_lruvec) {
-		if (page_matches_lruvec(page, locked_lruvec))
-			return locked_lruvec;
-
-		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
-	}
-
+	if (locked_lruvec)
+		return relock_page_lruvec(page, locked_lruvec);
 	return lock_page_lruvec_irqsave(page, flags);
 }
 
-- 
2.32.0
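For illustration, the relock logic in the patch can be modelled in plain userspace C. Everything below (`toy_lruvec`, `toy_page`, `toy_relock`, and the flag standing in for the per-lruvec spinlock) is invented for this sketch and is not a kernel API; the point is only the control flow of relock_page_lruvec():

```c
#include <stddef.h>

/*
 * Toy stand-ins for the kernel structures; a plain flag models the
 * per-lruvec lru_lock, since this sketch is single-threaded.
 */
struct toy_lruvec {
	int locked;	/* 1 while the "lru_lock" is held */
	int nlocks;	/* how many times the lock was taken */
};

struct toy_page {
	struct toy_lruvec *lruvec;
};

static struct toy_lruvec *toy_lock(struct toy_page *page)
{
	page->lruvec->locked = 1;
	page->lruvec->nlocks++;
	return page->lruvec;
}

static void toy_unlock(struct toy_lruvec *lruvec)
{
	lruvec->locked = 0;
}

/*
 * Mirrors relock_page_lruvec() from the patch: if the page already
 * belongs to the locked lruvec, keep holding that lock; otherwise
 * hand the lock over to the page's lruvec.  In the real kernel the
 * interrupt state is left alone across this handover.
 */
static struct toy_lruvec *toy_relock(struct toy_page *page,
				     struct toy_lruvec *locked)
{
	if (page->lruvec == locked)
		return locked;
	toy_unlock(locked);
	return toy_lock(page);
}
```

A matching page costs nothing (the lock is never dropped); only a page on a different lruvec triggers an unlock/lock pair.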




* Re: [PATCH] memcg: Optimise relock_page_lruvec functions
  2021-10-19 12:50 [PATCH] memcg: Optimise relock_page_lruvec functions Matthew Wilcox (Oracle)
@ 2021-10-25 10:44 ` Vlastimil Babka
  2021-10-25 11:16   ` Peter Zijlstra
  0 siblings, 1 reply; 3+ messages in thread
From: Vlastimil Babka @ 2021-10-25 10:44 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), linux-mm
  Cc: Alexander Duyck, Alex Shi, Hugh Dickins, Johannes Weiner,
	Thomas Gleixner, Peter Zijlstra

On 10/19/21 14:50, Matthew Wilcox (Oracle) wrote:
> Leave interrupts disabled when we change which lru lock is held.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Assuming lockdep is fine with e.g.:

spin_lock_irqsave(A, flags);
spin_unlock(A);
spin_lock(B);
spin_unlock_irqrestore(B, flags);

(with A and B of same class).





* Re: [PATCH] memcg: Optimise relock_page_lruvec functions
  2021-10-25 10:44 ` Vlastimil Babka
@ 2021-10-25 11:16   ` Peter Zijlstra
  0 siblings, 0 replies; 3+ messages in thread
From: Peter Zijlstra @ 2021-10-25 11:16 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle),
	linux-mm, Alexander Duyck, Alex Shi, Hugh Dickins,
	Johannes Weiner, Thomas Gleixner

On Mon, Oct 25, 2021 at 12:44:05PM +0200, Vlastimil Babka wrote:
> On 10/19/21 14:50, Matthew Wilcox (Oracle) wrote:
> > Leave interrupts disabled when we change which lru lock is held.
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> 
> Assuming lockdep is fine with e.g.:
> 
> spin_lock_irqsave(A, flags);
> spin_unlock(A);
> spin_lock(B);
> spin_unlock_irqrestore(B, flags);
> 
> (with A and B of same class).

It's unconditionally okay with that pattern. As far as lockdep
is concerned there really is no difference vs:

	local_irq_save()
	spin_lock(a)
	spin_unlock(a)
	spin_lock(b)
	spin_unlock(b)
	local_irq_restore()

It's the RT locking primitives that care about the difference :-)
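That equivalence can be sketched with a plain flag standing in for the CPU interrupt state; `toy_irq_save`, `toy_irq_restore`, and `handover_keeps_irqs_off` are names invented for this sketch, not kernel APIs:

```c
/* A single flag models "interrupts disabled on this CPU". */
static int toy_irqs_off;

static void toy_irq_save(unsigned long *flags)
{
	*flags = toy_irqs_off;	/* remember the previous state */
	toy_irqs_off = 1;
}

static void toy_irq_restore(unsigned long flags)
{
	toy_irqs_off = flags;
}

/*
 * Models the spin_lock_irqsave(A); spin_unlock(A); spin_lock(B);
 * spin_unlock_irqrestore(B) pattern: interrupts stay off across
 * the whole A -> B handover, exactly as in the local_irq_save()
 * form spelled out above.  Returns 1 when that holds.
 */
static int handover_keeps_irqs_off(void)
{
	unsigned long flags;
	int off_during_handover;

	toy_irq_save(&flags);		/* spin_lock_irqsave(A, flags) */
	/* ... unlock A, lock B: no irq operations in between ... */
	off_during_handover = toy_irqs_off;
	toy_irq_restore(flags);		/* spin_unlock_irqrestore(B, flags) */
	return off_during_handover && !toy_irqs_off;
}
```

The irq state is touched only at the outermost save/restore pair, so where the lock handover happens in between is invisible to it.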



