From: Andrew Morton <akpm@linux-foundation.org>
To: Qian Cai <cai@lca.pw>
Cc: hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/memcontrol: fix a data race in scan count
Date: Sun, 9 Feb 2020 20:28:40 -0800	[thread overview]
Message-ID: <20200209202840.2bf97ffcfa811550d733c461@linux-foundation.org> (raw)
In-Reply-To: <20200206034945.2481-1-cai@lca.pw>

On Wed,  5 Feb 2020 22:49:45 -0500 Qian Cai <cai@lca.pw> wrote:

> struct mem_cgroup_per_node mz.lru_zone_size[zone_idx][lru] could be
> accessed concurrently as noticed by KCSAN,
> 
> ...
>
>  Reported by Kernel Concurrency Sanitizer on:
>  CPU: 95 PID: 50964 Comm: cc1 Tainted: G        W  O L    5.5.0-next-20200204+ #6
>  Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
> 
> The write is done under lru_lock, but the read is lockless. The scan
> count is used to determine how aggressively the anon and file LRU lists
> should be scanned. Load tearing could produce an inefficient heuristic,
> so fix it by adding READ_ONCE() for the read.
> 
> ...
>
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -533,7 +533,7 @@ unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
>  	struct mem_cgroup_per_node *mz;
>  
>  	mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
> -	return mz->lru_zone_size[zone_idx][lru];
> +	return READ_ONCE(mz->lru_zone_size[zone_idx][lru]);
>  }

I worry about the readability/maintainability of these things.  A naive
reader who comes upon this code will wonder "why the heck is it using
READ_ONCE?".  A possibly lengthy trawl through the git history will
reveal the reason but that's rather unkind.  Wouldn't a simple

	/* modified under lru_lock, so use READ_ONCE */

improve the situation?
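
For illustration, a minimal sketch of how the accessor from the quoted
hunk might read with such a comment in place (the "static inline" and the
remainder of the parameter list are reconstructed from context and may
not match the actual source exactly):

	static inline
	unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
						   enum lru_list lru, int zone_idx)
	{
		struct mem_cgroup_per_node *mz;

		mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
		/* Modified under lru_lock; read locklessly, so avoid load tearing. */
		return READ_ONCE(mz->lru_zone_size[zone_idx][lru]);
	}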



Thread overview:
2020-02-06  3:49 [PATCH] mm/memcontrol: fix a data race in scan count Qian Cai
2020-02-10  4:28 ` Andrew Morton [this message]
2020-02-10  4:44   ` Qian Cai
