From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bharata B Rao <bharata@linux.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, aneesh.kumar@linux.ibm.com, dennis@kernel.org,
	tj@kernel.org, cl@linux.com, akpm@linux-foundation.org,
	amakhalov@vmware.com, guro@fb.com, vbabka@suse.cz,
	srikar@linux.vnet.ibm.com, psampat@linux.ibm.com,
	ego@linux.vnet.ibm.com, Bharata B Rao <bharata@linux.ibm.com>
Subject: [RFC PATCH v0 3/3] percpu: Avoid using percpu ptrs of non-existing cpus
Date: Tue, 1 Jun 2021 12:21:47 +0530
Message-Id: <20210601065147.53735-4-bharata@linux.ibm.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210601065147.53735-1-bharata@linux.ibm.com>
References: <20210601065147.53735-1-bharata@linux.ibm.com>
MIME-Version: 1.0

Prevent the callers of alloc_percpu() from using the percpu pointers
of non-existing CPUs.

Also switch those callers that require initialization of percpu data
for onlined CPUs to the new variant alloc_percpu_cb().

Note: Not all callers have been modified here.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
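For reference, here is a minimal sketch of the conversion pattern used
in the hunks below. It assumes the alloc_percpu_cb() interface added
earlier in this series (a per-CPU callback that receives the percpu
pointer, the CPU number and a caller-supplied cookie, presumably also
invoked for CPUs that come online after the allocation). struct foo,
foo_cpuhp_handler() and foo_init() are made-up names used only for
illustration, not part of this patch:

#include <linux/percpu.h>
#include <linux/types.h>
#include <linux/errno.h>

struct foo;

/* Hypothetical per-CPU state owned by struct foo. */
struct foo_pcpu {
	u64 count;
	struct foo *owner;
};

struct foo {
	struct foo_pcpu __percpu *pcpu;
};

/*
 * Initialize one CPU's slot. With alloc_percpu_cb() this runs for the
 * CPUs available at allocation time and (per the series' intent) again
 * when further CPUs are onlined, so late-onlined CPUs never expose
 * uninitialized percpu state.
 */
static int foo_cpuhp_handler(void __percpu *ptr, unsigned int cpu, void *data)
{
	struct foo_pcpu *fp = per_cpu_ptr(ptr, cpu);

	fp->owner = data;
	fp->count = 0;
	return 0;
}

static int foo_init(struct foo *f)
{
	f->pcpu = alloc_percpu_cb(struct foo_pcpu, foo_cpuhp_handler, f);
	if (!f->pcpu)
		return -ENOMEM;
	return 0;
}

Readers of such state are then switched from for_each_possible_cpu()
to for_each_online_cpu(), as done throughout the hunks below.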
 fs/namespace.c           |  4 ++--
 kernel/cgroup/rstat.c    | 20 ++++++++++++++++----
 kernel/sched/cpuacct.c   | 10 +++++-----
 kernel/sched/psi.c       | 14 +++++++++++---
 lib/percpu-refcount.c    |  4 ++--
 lib/percpu_counter.c     |  2 +-
 net/ipv4/fib_semantics.c |  2 +-
 net/ipv6/route.c         |  6 +++---
 8 files changed, 41 insertions(+), 21 deletions(-)

diff --git a/fs/namespace.c b/fs/namespace.c
index c3f1a78ba369..b6ea584b99e5 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -182,7 +182,7 @@ int mnt_get_count(struct mount *mnt)
 	int count = 0;
 	int cpu;
 
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		count += per_cpu_ptr(mnt->mnt_pcp, cpu)->mnt_count;
 	}
 
@@ -294,7 +294,7 @@ static unsigned int mnt_get_writers(struct mount *mnt)
 	unsigned int count = 0;
 	int cpu;
 
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		count += per_cpu_ptr(mnt->mnt_pcp, cpu)->mnt_writers;
 	}
 
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index cee265cb535c..b25c59138c0b 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -152,7 +152,7 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
 
 	lockdep_assert_held(&cgroup_rstat_lock);
 
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock,
 						       cpu);
 		struct cgroup *pos = NULL;
@@ -245,19 +245,31 @@ void cgroup_rstat_flush_release(void)
 	spin_unlock_irq(&cgroup_rstat_lock);
 }
 
+static int cgroup_rstat_cpuhp_handler(void __percpu *ptr, unsigned int cpu, void *data)
+{
+	struct cgroup *cgrp = (struct cgroup *)data;
+	struct cgroup_rstat_cpu *rstatc = per_cpu_ptr(ptr, cpu);
+
+	rstatc->updated_children = cgrp;
+	u64_stats_init(&rstatc->bsync);
+	return 0;
+}
+
 int cgroup_rstat_init(struct cgroup *cgrp)
 {
 	int cpu;
 
 	/* the root cgrp has rstat_cpu preallocated */
 	if (!cgrp->rstat_cpu) {
-		cgrp->rstat_cpu = alloc_percpu(struct cgroup_rstat_cpu);
+		cgrp->rstat_cpu = alloc_percpu_cb(struct cgroup_rstat_cpu,
+						  cgroup_rstat_cpuhp_handler,
+						  cgrp);
 		if (!cgrp->rstat_cpu)
 			return -ENOMEM;
 	}
 
 	/* ->updated_children list is self terminated */
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		struct cgroup_rstat_cpu *rstatc = cgroup_rstat_cpu(cgrp, cpu);
 
 		rstatc->updated_children = cgrp;
@@ -274,7 +286,7 @@ void cgroup_rstat_exit(struct cgroup *cgrp)
 	cgroup_rstat_flush(cgrp);
 
 	/* sanity check */
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		struct cgroup_rstat_cpu *rstatc = cgroup_rstat_cpu(cgrp, cpu);
 
 		if (WARN_ON_ONCE(rstatc->updated_children != cgrp) ||
diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
index 104a1bade14f..81dd53387ba5 100644
--- a/kernel/sched/cpuacct.c
+++ b/kernel/sched/cpuacct.c
@@ -160,7 +160,7 @@ static u64 __cpuusage_read(struct cgroup_subsys_state *css,
 	u64 totalcpuusage = 0;
 	int i;
 
-	for_each_possible_cpu(i)
+	for_each_online_cpu(i)
 		totalcpuusage += cpuacct_cpuusage_read(ca, i, index);
 
 	return totalcpuusage;
@@ -195,7 +195,7 @@ static int cpuusage_write(struct cgroup_subsys_state *css, struct cftype *cft,
 	if (val)
 		return -EINVAL;
 
-	for_each_possible_cpu(cpu)
+	for_each_online_cpu(cpu)
 		cpuacct_cpuusage_write(ca, cpu, 0);
 
 	return 0;
@@ -208,7 +208,7 @@ static int __cpuacct_percpu_seq_show(struct seq_file *m,
 	u64 percpu;
 	int i;
 
-	for_each_possible_cpu(i) {
+	for_each_online_cpu(i) {
 		percpu = cpuacct_cpuusage_read(ca, i, index);
 		seq_printf(m, "%llu ", (unsigned long long) percpu);
 	}
@@ -242,7 +242,7 @@ static int cpuacct_all_seq_show(struct seq_file *m, void *V)
 		seq_printf(m, " %s", cpuacct_stat_desc[index]);
 	seq_puts(m, "\n");
 
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		struct cpuacct_usage *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
 
 		seq_printf(m, "%d", cpu);
@@ -275,7 +275,7 @@ static int cpuacct_stats_show(struct seq_file *sf, void *v)
 	int stat;
 
 	memset(val, 0, sizeof(val));
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		u64 *cpustat = per_cpu_ptr(ca->cpustat, cpu)->cpustat;
 
 		val[CPUACCT_STAT_USER] += cpustat[CPUTIME_USER];
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index cc25a3cff41f..228977aa4780 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -186,7 +186,7 @@ static void group_init(struct psi_group *group)
 {
 	int cpu;
 
-	for_each_possible_cpu(cpu)
+	for_each_online_cpu(cpu)
 		seqcount_init(&per_cpu_ptr(group->pcpu, cpu)->seq);
 	group->avg_last_update = sched_clock();
 	group->avg_next_update = group->avg_last_update + psi_period;
@@ -321,7 +321,7 @@ static void collect_percpu_times(struct psi_group *group,
 	 * the sampling period. This eliminates artifacts from uneven
 	 * loading, or even entirely idle CPUs.
 	 */
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		u32 times[NR_PSI_STATES];
 		u32 nonidle;
 		u32 cpu_changed_states;
@@ -935,12 +935,20 @@ void psi_memstall_leave(unsigned long *flags)
 }
 
 #ifdef CONFIG_CGROUPS
+static int psi_cpuhp_handler(void __percpu *ptr, unsigned int cpu, void *unused)
+{
+	struct psi_group_cpu *groupc = per_cpu_ptr(ptr, cpu);
+
+	seqcount_init(&groupc->seq);
+	return 0;
+}
+
 int psi_cgroup_alloc(struct cgroup *cgroup)
 {
 	if (static_branch_likely(&psi_disabled))
 		return 0;
 
-	cgroup->psi.pcpu = alloc_percpu(struct psi_group_cpu);
+	cgroup->psi.pcpu = alloc_percpu_cb(struct psi_group_cpu, psi_cpuhp_handler, NULL);
 	if (!cgroup->psi.pcpu)
 		return -ENOMEM;
 	group_init(&cgroup->psi);
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index a1071cdefb5a..aeba43c33600 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -173,7 +173,7 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
 	unsigned long count = 0;
 	int cpu;
 
-	for_each_possible_cpu(cpu)
+	for_each_online_cpu(cpu)
 		count += *per_cpu_ptr(percpu_count, cpu);
 
 	pr_debug("global %lu percpu %lu\n",
@@ -253,7 +253,7 @@ static void __percpu_ref_switch_to_percpu(struct percpu_ref *ref)
 	 * zeroing is visible to all percpu accesses which can see the
 	 * following __PERCPU_REF_ATOMIC clearing.
 	 */
-	for_each_possible_cpu(cpu)
+	for_each_online_cpu(cpu)
 		*per_cpu_ptr(percpu_count, cpu) = 0;
 
 	smp_store_release(&ref->percpu_count_ptr,
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index ed610b75dc32..db40abc6f0f5 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -63,7 +63,7 @@ void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&fbc->lock, flags);
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
 		*pcount = 0;
 	}
diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
index a632b66bc13a..dbfd14b0077f 100644
--- a/net/ipv4/fib_semantics.c
+++ b/net/ipv4/fib_semantics.c
@@ -194,7 +194,7 @@ static void rt_fibinfo_free_cpus(struct rtable __rcu * __percpu *rtp)
 	if (!rtp)
 		return;
 
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		struct rtable *rt;
 
 		rt = rcu_dereference_protected(*per_cpu_ptr(rtp, cpu), 1);
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index a22822bdbf39..e7db3a5fe5c5 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -165,7 +165,7 @@ static void rt6_uncached_list_flush_dev(struct net *net, struct net_device *dev)
 	if (dev == loopback_dev)
 		return;
 
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		struct uncached_list *ul = per_cpu_ptr(&rt6_uncached_list, cpu);
 		struct rt6_info *rt;
 
@@ -3542,7 +3542,7 @@ void fib6_nh_release(struct fib6_nh *fib6_nh)
 	if (fib6_nh->rt6i_pcpu) {
 		int cpu;
 
-		for_each_possible_cpu(cpu) {
+		for_each_online_cpu(cpu) {
 			struct rt6_info **ppcpu_rt;
 			struct rt6_info *pcpu_rt;
 
@@ -6569,7 +6569,7 @@ int __init ip6_route_init(void)
 #endif
 #endif
 
-	for_each_possible_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		struct uncached_list *ul = per_cpu_ptr(&rt6_uncached_list, cpu);
 
 		INIT_LIST_HEAD(&ul->head);
-- 
2.31.1