* [PATCH 0/4] mm/list_lru: Add list_lru_shrink_walk_irq() and a user
@ 2018-07-16 11:19 Sebastian Andrzej Siewior
  2018-07-16 11:19 ` [PATCH 1/4] mm/list_lru: use list_lru_walk_one() in list_lru_walk_node() Sebastian Andrzej Siewior
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-07-16 11:19 UTC (permalink / raw)
  To: linux-mm; +Cc: tglx, Vladimir Davydov, Andrew Morton

This series removes the local_irq_disable() around
list_lru_shrink_walk() (as used by mm/workingset) by adding
list_lru_shrink_walk_irq(). Vladimir Davydov preferred this approach
over the `irq' argument that an earlier version added to struct
list_lru.
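
For orientation, a minimal sketch of what the new entry point might
look like; the helper name list_lru_walk_one_irq() is taken from the
posted patches, and the exact signature may differ in detail:

	/*
	 * Sketch: like list_lru_shrink_walk(), but the per-node lock is
	 * taken with spin_lock_irq() inside the walk, so callers such as
	 * mm/workingset no longer need local_irq_disable() around it.
	 */
	static inline unsigned long
	list_lru_shrink_walk_irq(struct list_lru *lru, struct shrink_control *sc,
				 list_lru_walk_cb isolate, void *cb_arg)
	{
		return list_lru_walk_one_irq(lru, sc->nid, sc->memcg, isolate,
					     cb_arg, &sc->nr_to_scan);
	}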

The initial post of this series received a Reviewed-by tag from
Vladimir Davydov, which I have added to each patch.
The series applies on top of akpm's tree, which already contains
Kirill's shrink_slab series, and does not clash with it (akpm asked me
to wait a week or so and then repost).

I tested the code paths by triggering the OOM killer via memory
overcommit; lockdep did not complain, nor did I see any other warnings.
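
For reference, a trivial userspace loop of roughly this shape can
trigger the OOM killer as described above (a hypothetical test helper,
not part of the series; run it in a memory-constrained VM or cgroup):

	#include <stdlib.h>
	#include <string.h>

	int main(void)
	{
		for (;;) {
			char *p = malloc(1 << 20);	/* 1 MiB per step */
			if (!p)
				break;
			memset(p, 0xaa, 1 << 20);	/* fault the pages in */
		}
		return 0;
	}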

Sebastian

* Re: [PATCH 3/3] mm: list_lru: Add lock_irq member to __list_lru_init()
@ 2018-06-24 20:09 Vladimir Davydov
  2018-07-03 14:52 ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 6+ messages in thread
From: Vladimir Davydov @ 2018-06-24 20:09 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior; +Cc: linux-mm, tglx, Andrew Morton

On Fri, Jun 22, 2018 at 05:12:21PM +0200, Sebastian Andrzej Siewior wrote:
> scan_shadow_nodes() is the only user of __list_lru_walk_one() which
> disables interrupts before invoking it. The reason is that nlru->lock is
> nesting inside IRQ-safe i_pages lock. Some functions unconditionally
> acquire the lock with the _irq() suffix.
> 
> __list_lru_walk_one() can't acquire the lock unconditionally with _irq()
> suffix because it might invoke a callback which unlocks the nlru->lock
> and invokes a sleeping function without enabling interrupts.
> 
> Add an argument to __list_lru_init() which identifies whether the
> nlru->lock needs to be acquired with interrupts disabled or not.
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  include/linux/list_lru.h | 12 ++++++++----
>  mm/list_lru.c            | 14 ++++++++++----
>  mm/workingset.c          | 12 ++++--------
>  3 files changed, 22 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index 96def9d15b1b..c2161c3a1809 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -51,18 +51,22 @@ struct list_lru_node {
>  
>  struct list_lru {
>  	struct list_lru_node	*node;
> +	bool			lock_irq;
>  #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
>  	struct list_head	list;
>  #endif
>  };

TBH I don't like this patch, because the new member of struct list_lru,
lock_irq, has a rather obscure meaning IMHO: it makes list_lru_walk
disable irqs before taking lru_lock, but at the same time list_lru_add
and list_lru_del never do that, no matter whether lock_irq is true or
false. That is, if a user of struct list_lru sets this flag, they are
supposed to disable irqs for list_lru_add/del themselves (mm/workingset
does that). IMHO the code of mm/workingset is clear as it is. Since it
is the only place where this flag would be used, I'd rather leave it as is.
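
For illustration, the add/del side in mm/workingset (roughly as in the
v4.18 sources; simplified) already runs with interrupts off because the
caller holds the IRQ-safe i_pages lock, which is exactly the asymmetry
described above:

	/* Caller holds xa_lock_irq(&mapping->i_pages), so IRQs are
	 * already off and list_lru_add/del take nlru->lock with a
	 * plain spin_lock(). The proposed lock_irq flag would affect
	 * only the walk, not this path. */
	void workingset_update_node(struct radix_tree_node *node)
	{
		if (node->count && node->count == node->exceptional) {
			if (list_empty(&node->private_list))
				list_lru_add(&shadow_nodes, &node->private_list);
		} else {
			if (!list_empty(&node->private_list))
				list_lru_del(&shadow_nodes, &node->private_list);
		}
	}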

>  
>  void list_lru_destroy(struct list_lru *lru);
> -int __list_lru_init(struct list_lru *lru, bool memcg_aware,
> +int __list_lru_init(struct list_lru *lru, bool memcg_aware, bool lock_irq,
>  		    struct lock_class_key *key);
>  
> -#define list_lru_init(lru)		__list_lru_init((lru), false, NULL)
> -#define list_lru_init_key(lru, key)	__list_lru_init((lru), false, (key))
> -#define list_lru_init_memcg(lru)	__list_lru_init((lru), true, NULL)
> +#define list_lru_init(lru)		__list_lru_init((lru), false, false, \
> +							NULL)
> +#define list_lru_init_key(lru, key)	__list_lru_init((lru), false, false, \
> +							(key))
> +#define list_lru_init_memcg(lru)	__list_lru_init((lru), true, false, \
> +							NULL)
>  
>  int memcg_update_all_list_lrus(int num_memcgs);
>  void memcg_drain_all_list_lrus(int src_idx, int dst_idx);
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index fcfb6c89ed47..1c49d48078e4 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -204,7 +204,10 @@ __list_lru_walk_one(struct list_lru *lru, int nid, int memcg_idx,
>  	struct list_head *item, *n;
>  	unsigned long isolated = 0;
>  
> -	spin_lock(&nlru->lock);
> +	if (lru->lock_irq)
> +		spin_lock_irq(&nlru->lock);
> +	else
> +		spin_lock(&nlru->lock);
>  	l = list_lru_from_memcg_idx(nlru, memcg_idx);
>  restart:
>  	list_for_each_safe(item, n, &l->list) {
> @@ -251,7 +254,10 @@ __list_lru_walk_one(struct list_lru *lru, int nid, int memcg_idx,
>  		}
>  	}
>  
> -	spin_unlock(&nlru->lock);
> +	if (lru->lock_irq)
> +		spin_unlock_irq(&nlru->lock);
> +	else
> +		spin_unlock(&nlru->lock);
>  	return isolated;
>  }
>  
> @@ -553,7 +559,7 @@ static void memcg_destroy_list_lru(struct list_lru *lru)
>  }
>  #endif /* CONFIG_MEMCG && !CONFIG_SLOB */
>  
> -int __list_lru_init(struct list_lru *lru, bool memcg_aware,
> +int __list_lru_init(struct list_lru *lru, bool memcg_aware, bool lock_irq,
>  		    struct lock_class_key *key)
>  {
>  	int i;
> @@ -580,7 +586,7 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware,
>  		lru->node = NULL;
>  		goto out;
>  	}
> -
> +	lru->lock_irq = lock_irq;
>  	list_lru_register(lru);
>  out:
>  	memcg_put_cache_ids();
> diff --git a/mm/workingset.c b/mm/workingset.c
> index 529480c21f93..23ce00f48212 100644
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
> @@ -480,13 +480,8 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
>  static unsigned long scan_shadow_nodes(struct shrinker *shrinker,
>  				       struct shrink_control *sc)
>  {
> -	unsigned long ret;
> -
> -	/* list_lru lock nests inside the IRQ-safe i_pages lock */
> -	local_irq_disable();
> -	ret = list_lru_shrink_walk(&shadow_nodes, sc, shadow_lru_isolate, NULL);
> -	local_irq_enable();
> -	return ret;
> +	return list_lru_shrink_walk(&shadow_nodes, sc, shadow_lru_isolate,
> +				    NULL);
>  }
>  
>  static struct shrinker workingset_shadow_shrinker = {
> @@ -523,7 +518,8 @@ static int __init workingset_init(void)
>  	pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
>  	       timestamp_bits, max_order, bucket_order);
>  
> -	ret = __list_lru_init(&shadow_nodes, true, &shadow_nodes_key);
> +	/* list_lru lock nests inside the IRQ-safe i_pages lock */
> +	ret = __list_lru_init(&shadow_nodes, true, true, &shadow_nodes_key);
>  	if (ret)
>  		goto err;
>  	ret = register_shrinker(&workingset_shadow_shrinker);
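
For context on the commit message's point that the isolate callback may
unlock nlru->lock: the pre-series callback drops the lock itself and
toggles the IRQ state around cond_resched(). A simplified skeleton,
modeled on the v4.18 shadow_lru_isolate() with the tree surgery and
error handling omitted:

	static enum lru_status shadow_lru_isolate(struct list_head *item,
						  struct list_lru_one *lru,
						  spinlock_t *lru_lock,
						  void *arg)
	{
		enum lru_status ret = LRU_REMOVED_RETRY;

		/* ... trylock the IRQ-safe i_pages lock, take the node
		 * off the LRU, then drop lru_lock while still holding
		 * i_pages ... */
		list_lru_isolate(lru, item);
		spin_unlock(lru_lock);

		/* ... reclaim the shadow entries, unlock i_pages ... */

		local_irq_enable();	/* caller disabled IRQs */
		cond_resched();		/* may sleep, needs IRQs on */
		local_irq_disable();
		spin_lock(lru_lock);	/* plain lock, IRQs off again */
		return ret;
	}

The IRQ state is thus managed partly by the callback, which is why the
walk cannot blindly switch to the _irq() lock variants.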


Thread overview: 6+ messages
2018-07-16 11:19 [PATCH 0/4] mm/list_lru: Add list_lru_shrink_walk_irq() and a user Sebastian Andrzej Siewior
2018-07-16 11:19 ` [PATCH 1/4] mm/list_lru: use list_lru_walk_one() in list_lru_walk_node() Sebastian Andrzej Siewior
2018-07-16 11:19 ` [PATCH 2/4] mm/list_lru: Move locking from __list_lru_walk_one() to its caller Sebastian Andrzej Siewior
2018-07-16 11:19 ` [PATCH 3/4] mm/list_lru: Pass struct list_lru_node as an argument to __list_lru_walk_one() Sebastian Andrzej Siewior
2018-07-16 11:19 ` [PATCH 4/4] mm/list_lru: Introduce list_lru_shrink_walk_irq() Sebastian Andrzej Siewior
  -- strict thread matches above, loose matches on Subject: below --
2018-06-24 20:09 [PATCH 3/3] mm: list_lru: Add lock_irq member to __list_lru_init() Vladimir Davydov
2018-07-03 14:52 ` Sebastian Andrzej Siewior
2018-07-03 14:52   ` [PATCH 4/4] mm/list_lru: Introduce list_lru_shrink_walk_irq() Sebastian Andrzej Siewior
