From: Ben Widawsky <ben.widawsky@intel.com>
To: linux-mm <linux-mm@kvack.org>, Andrew Morton
Cc: Ben Widawsky, Dave Hansen, Michal Hocko, linux-kernel@vger.kernel.org
Subject: [PATCH 06/12] mm/mempolicy: kill v.preferred_nodes
Date: Fri, 30 Oct 2020 12:02:32 -0700
Message-Id: <20201030190238.306764-7-ben.widawsky@intel.com>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com>
References: <20201030190238.306764-1-ben.widawsky@intel.com>
Now that preferred_nodes is just a mask, and policies are mutually
exclusive, there is no reason to have a separate mask.

This patch is optional. It definitely helps clean up code in future
patches, but there is no functional difference compared to leaving the
mask with its previous name. I do believe it helps demonstrate the
exclusivity of the fields.

Link: https://lore.kernel.org/r/20200630212517.308045-7-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky
---
 include/linux/mempolicy.h |   6 +-
 mm/mempolicy.c            | 112 ++++++++++++++++++--------------------
 2 files changed, 55 insertions(+), 63 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 23ee10556b82..ec811c35513e 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -46,11 +46,7 @@ struct mempolicy {
 	atomic_t refcnt;
 	unsigned short mode;	/* See MPOL_* above */
 	unsigned short flags;	/* See set_mempolicy() MPOL_F_* above */
-	union {
-		nodemask_t	preferred_nodes; /* preferred */
-		nodemask_t	nodes;		/* interleave/bind */
-		/* undefined for default */
-	} v;
+	nodemask_t nodes;	/* interleave/bind/many */
 	union {
 		nodemask_t cpuset_mems_allowed;	/* relative to these nodes */
 		nodemask_t user_nodemask;	/* nodemask passed by user */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1b88c133f5c5..f15dae340333 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -199,7 +199,7 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes)
 {
 	if (nodes_empty(*nodes))
 		return -EINVAL;
-	pol->v.nodes = *nodes;
+	pol->nodes = *nodes;
 	return 0;
 }
 
@@ -211,7 +211,7 @@ static int mpol_new_preferred_many(struct mempolicy *pol,
 	else if (nodes_empty(*nodes))
 		return -EINVAL;		/* no allowed nodes */
 	else
-		pol->v.preferred_nodes = *nodes;
+		pol->nodes = *nodes;
 	return 0;
 }
 
@@ -231,7 +231,7 @@ static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes)
 {
 	if (nodes_empty(*nodes))
 		return -EINVAL;
-	pol->v.nodes = *nodes;
+	pol->nodes = *nodes;
 	return 0;
 }
 
@@ -348,15 +348,15 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
 	else if (pol->flags & MPOL_F_RELATIVE_NODES)
 		mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
 	else {
-		nodes_remap(tmp, pol->v.nodes,pol->w.cpuset_mems_allowed,
-								*nodes);
+		nodes_remap(tmp, pol->nodes, pol->w.cpuset_mems_allowed,
+			    *nodes);
 		pol->w.cpuset_mems_allowed = *nodes;
 	}
 
 	if (nodes_empty(tmp))
 		tmp = *nodes;
 
-	pol->v.nodes = tmp;
+	pol->nodes = tmp;
 }
 
 static void mpol_rebind_preferred_common(struct mempolicy *pol,
@@ -369,17 +369,17 @@ static void mpol_rebind_preferred_common(struct mempolicy *pol,
 		int node = first_node(pol->w.user_nodemask);
 
 		if (node_isset(node, *nodes)) {
-			pol->v.preferred_nodes = nodemask_of_node(node);
+			pol->nodes = nodemask_of_node(node);
 			pol->flags &= ~MPOL_F_LOCAL;
 		} else
 			pol->flags |= MPOL_F_LOCAL;
 	} else if (pol->flags & MPOL_F_RELATIVE_NODES) {
 		mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
-		pol->v.preferred_nodes = tmp;
+		pol->nodes = tmp;
 	} else if (!(pol->flags & MPOL_F_LOCAL)) {
-		nodes_remap(tmp, pol->v.preferred_nodes,
-			    pol->w.cpuset_mems_allowed, *preferred_nodes);
-		pol->v.preferred_nodes = tmp;
+		nodes_remap(tmp, pol->nodes, pol->w.cpuset_mems_allowed,
+			    *preferred_nodes);
+		pol->nodes = tmp;
 		pol->w.cpuset_mems_allowed = *nodes;
 	}
 }
@@ -949,14 +949,14 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 	switch (p->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		*nodes = p->v.nodes;
+		*nodes = p->nodes;
 		break;
 	case MPOL_PREFERRED_MANY:
-		*nodes = p->v.preferred_nodes;
+		*nodes = p->nodes;
 		break;
 	case MPOL_PREFERRED:
 		if (!(p->flags & MPOL_F_LOCAL))
-			*nodes = p->v.preferred_nodes;
+			*nodes = p->nodes;
 		/* else return empty node mask for local allocation */
 		break;
 	default:
@@ -1042,7 +1042,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 			*policy = err;
 		} else if (pol == current->mempolicy &&
 				pol->mode == MPOL_INTERLEAVE) {
-			*policy = next_node_in(current->il_prev, pol->v.nodes);
+			*policy = next_node_in(current->il_prev, pol->nodes);
 		} else {
 			err = -EINVAL;
 			goto out;
@@ -1898,14 +1898,14 @@ static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
 	BUG_ON(dynamic_policy_zone == ZONE_MOVABLE);
 
 	/*
-	 * if policy->v.nodes has movable memory only,
+	 * if policy->nodes has movable memory only,
 	 * we apply policy when gfp_zone(gfp) = ZONE_MOVABLE only.
 	 *
-	 * policy->v.nodes is intersect with node_states[N_MEMORY].
+	 * policy->nodes is intersect with node_states[N_MEMORY].
 	 * so if the following test faile, it implies
-	 * policy->v.nodes has movable memory only.
+	 * policy->nodes has movable memory only.
 	 */
-	if (!nodes_intersects(policy->v.nodes, node_states[N_HIGH_MEMORY]))
+	if (!nodes_intersects(policy->nodes, node_states[N_HIGH_MEMORY]))
 		dynamic_policy_zone = ZONE_MOVABLE;
 
 	return zone >= dynamic_policy_zone;
@@ -1919,9 +1919,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 {
 	/* Lower zones don't get a nodemask applied for MPOL_BIND */
 	if (unlikely(policy->mode == MPOL_BIND) &&
-			apply_policy_zone(policy, gfp_zone(gfp)) &&
-			cpuset_nodemask_valid_mems_allowed(&policy->v.nodes))
-		return &policy->v.nodes;
+	    apply_policy_zone(policy, gfp_zone(gfp)) &&
+	    cpuset_nodemask_valid_mems_allowed(&policy->nodes))
+		return &policy->nodes;
 
 	return NULL;
 }
@@ -1932,7 +1932,7 @@ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
 	if ((policy->mode == MPOL_PREFERRED ||
 	     policy->mode == MPOL_PREFERRED_MANY) &&
 	    !(policy->flags & MPOL_F_LOCAL)) {
-		nd = first_node(policy->v.preferred_nodes);
+		nd = first_node(policy->nodes);
 	} else {
 		/*
 		 * __GFP_THISNODE shouldn't even be used with the bind policy
@@ -1951,7 +1951,7 @@ static unsigned interleave_nodes(struct mempolicy *policy)
 	unsigned next;
 	struct task_struct *me = current;
 
-	next = next_node_in(me->il_prev, policy->v.nodes);
+	next = next_node_in(me->il_prev, policy->nodes);
 	if (next < MAX_NUMNODES)
 		me->il_prev = next;
 	return next;
@@ -1979,7 +1979,7 @@ unsigned int mempolicy_slab_node(void)
 		/*
 		 * handled MPOL_F_LOCAL above
 		 */
-		return first_node(policy->v.preferred_nodes);
+		return first_node(policy->nodes);
 
 	case MPOL_INTERLEAVE:
 		return interleave_nodes(policy);
@@ -1995,7 +1995,7 @@ unsigned int mempolicy_slab_node(void)
 		enum zone_type highest_zoneidx = gfp_zone(GFP_KERNEL);
 		zonelist = &NODE_DATA(node)->node_zonelists[ZONELIST_FALLBACK];
 		z = first_zones_zonelist(zonelist, highest_zoneidx,
-							&policy->v.nodes);
+					 &policy->nodes);
 		return z->zone ? zone_to_nid(z->zone) : node;
 	}
 
@@ -2006,12 +2006,12 @@ unsigned int mempolicy_slab_node(void)
 
 /*
  * Do static interleaving for a VMA with known offset @n.  Returns the n'th
- * node in pol->v.nodes (starting from n=0), wrapping around if n exceeds the
+ * node in pol->nodes (starting from n=0), wrapping around if n exceeds the
  * number of present nodes.
  */
 static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
 {
-	unsigned nnodes = nodes_weight(pol->v.nodes);
+	unsigned nnodes = nodes_weight(pol->nodes);
 	unsigned target;
 	int i;
 	int nid;
@@ -2019,9 +2019,9 @@ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
 	if (!nnodes)
 		return numa_node_id();
 	target = (unsigned int)n % nnodes;
-	nid = first_node(pol->v.nodes);
+	nid = first_node(pol->nodes);
 	for (i = 0; i < target; i++)
-		nid = next_node(nid, pol->v.nodes);
+		nid = next_node(nid, pol->nodes);
 	return nid;
 }
 
@@ -2077,7 +2077,7 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
 	} else {
 		nid = policy_node(gfp_flags, *mpol, numa_node_id());
 		if ((*mpol)->mode == MPOL_BIND)
-			*nodemask = &(*mpol)->v.nodes;
+			*nodemask = &(*mpol)->nodes;
 	}
 	return nid;
 }
@@ -2110,19 +2110,19 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 	mempolicy = current->mempolicy;
 	switch (mempolicy->mode) {
 	case MPOL_PREFERRED_MANY:
-		*mask = mempolicy->v.preferred_nodes;
+		*mask = mempolicy->nodes;
 		break;
 	case MPOL_PREFERRED:
 		if (mempolicy->flags & MPOL_F_LOCAL)
 			nid = numa_node_id();
 		else
-			nid = first_node(mempolicy->v.preferred_nodes);
+			nid = first_node(mempolicy->nodes);
 		init_nodemask_of_node(mask, nid);
 		break;
 
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		*mask = mempolicy->v.nodes;
+		*mask = mempolicy->nodes;
 		break;
 
 	default:
@@ -2167,11 +2167,11 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 		 */
 		break;
 	case MPOL_PREFERRED_MANY:
-		ret = nodes_intersects(mempolicy->v.preferred_nodes, *mask);
+		ret = nodes_intersects(mempolicy->nodes, *mask);
 		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		ret = nodes_intersects(mempolicy->v.nodes, *mask);
+		ret = nodes_intersects(mempolicy->nodes, *mask);
 		break;
 	default:
 		BUG();
@@ -2260,7 +2260,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		if ((pol->mode == MPOL_PREFERRED ||
 		     pol->mode == MPOL_PREFERRED_MANY) &&
 		    !(pol->flags & MPOL_F_LOCAL))
-			hpage_node = first_node(pol->v.preferred_nodes);
+			hpage_node = first_node(pol->nodes);
 
 		nmask = policy_nodemask(gfp, pol);
 		if (!nmask || node_isset(hpage_node, *nmask)) {
@@ -2394,15 +2394,14 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
 	switch (a->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		return !!nodes_equal(a->v.nodes, b->v.nodes);
+		return !!nodes_equal(a->nodes, b->nodes);
 	case MPOL_PREFERRED_MANY:
-		return !!nodes_equal(a->v.preferred_nodes,
-				     b->v.preferred_nodes);
+		return !!nodes_equal(a->nodes, b->nodes);
 	case MPOL_PREFERRED:
 		/* a's ->flags is the same as b's */
 		if (a->flags & MPOL_F_LOCAL)
 			return true;
-		return nodes_equal(a->v.preferred_nodes, b->v.preferred_nodes);
+		return nodes_equal(a->nodes, b->nodes);
 	default:
 		BUG();
 		return false;
@@ -2546,7 +2545,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		if (pol->flags & MPOL_F_LOCAL)
 			polnid = numa_node_id();
 		else
-			polnid = first_node(pol->v.preferred_nodes);
+			polnid = first_node(pol->nodes);
 		break;
 
 	case MPOL_BIND:
@@ -2557,12 +2556,11 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		 * else select nearest allowed node, if any.
 		 * If no allowed nodes, use current [!misplaced].
 		 */
-		if (node_isset(curnid, pol->v.nodes))
+		if (node_isset(curnid, pol->nodes))
 			goto out;
-		z = first_zones_zonelist(
-				node_zonelist(numa_node_id(), GFP_HIGHUSER),
-				gfp_zone(GFP_HIGHUSER),
-				&pol->v.nodes);
+		z = first_zones_zonelist(node_zonelist(numa_node_id(),
+							GFP_HIGHUSER),
+					 gfp_zone(GFP_HIGHUSER), &pol->nodes);
 		polnid = zone_to_nid(z->zone);
 		break;
 
@@ -2763,11 +2761,9 @@ int mpol_set_shared_policy(struct shared_policy *info,
 	struct sp_node *new = NULL;
 	unsigned long sz = vma_pages(vma);
 
-	pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n",
-		 vma->vm_pgoff,
-		 sz, npol ? npol->mode : -1,
-		 npol ? npol->flags : -1,
-		 npol ? nodes_addr(npol->v.nodes)[0] : NUMA_NO_NODE);
+	pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n", vma->vm_pgoff, sz,
+		 npol ? npol->mode : -1, npol ? npol->flags : -1,
+		 npol ? nodes_addr(npol->nodes)[0] : NUMA_NO_NODE);
 
 	if (npol) {
 		new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol);
@@ -2861,11 +2857,11 @@ void __init numa_policy_init(void)
 					     0, SLAB_PANIC, NULL);
 
 	for_each_node(nid) {
-		preferred_node_policy[nid] = (struct mempolicy) {
+		preferred_node_policy[nid] = (struct mempolicy){
 			.refcnt = ATOMIC_INIT(1),
 			.mode = MPOL_PREFERRED,
 			.flags = MPOL_F_MOF | MPOL_F_MORON,
-			.v = { .preferred_nodes = nodemask_of_node(nid), },
+			.nodes = nodemask_of_node(nid),
 		};
 	}
 
@@ -3031,9 +3027,9 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
 	 * for /proc/mounts, /proc/pid/mounts and /proc/pid/mountinfo.
 	 */
 	if (mode != MPOL_PREFERRED)
-		new->v.nodes = nodes;
+		new->nodes = nodes;
 	else if (nodelist)
-		new->v.preferred_nodes = nodemask_of_node(first_node(nodes));
+		new->nodes = nodemask_of_node(first_node(nodes));
 	else
 		new->flags |= MPOL_F_LOCAL;
 
@@ -3089,11 +3085,11 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
 		if (flags & MPOL_F_LOCAL)
 			mode = MPOL_LOCAL;
 		else
-			nodes_or(nodes, nodes, pol->v.preferred_nodes);
+			nodes_or(nodes, nodes, pol->nodes);
 		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		nodes = pol->v.nodes;
+		nodes = pol->nodes;
 		break;
 	default:
 		WARN_ON_ONCE(1);
-- 
2.29.2
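
As an illustration of what the flattening buys at call sites, here is a
minimal standalone userspace sketch. It uses simplified stand-in types:
the struct names, the toy nodemask_t, and the get_nodes_* helpers are
invented for illustration only and are not kernel code.

	#include <stdio.h>

	typedef unsigned long nodemask_t; /* toy stand-in for the kernel bitmap */

	enum mode { MPOL_PREFERRED, MPOL_BIND, MPOL_INTERLEAVE,
		    MPOL_PREFERRED_MANY };

	/* Before: one union member per family of modes. */
	struct mempolicy_old {
		enum mode mode;
		union {
			nodemask_t preferred_nodes;	/* preferred */
			nodemask_t nodes;		/* interleave/bind */
		} v;
	};

	/* After: modes are exclusive, so a single mask serves them all. */
	struct mempolicy_new {
		enum mode mode;
		nodemask_t nodes;			/* interleave/bind/many */
	};

	/* Callers used to pick the union member matching the mode... */
	static nodemask_t get_nodes_old(const struct mempolicy_old *p)
	{
		switch (p->mode) {
		case MPOL_PREFERRED:
		case MPOL_PREFERRED_MANY:
			return p->v.preferred_nodes;
		default:
			return p->v.nodes;
		}
	}

	/* ...whereas after the flattening every mode reads the same field. */
	static nodemask_t get_nodes_new(const struct mempolicy_new *p)
	{
		return p->nodes;
	}

	int main(void)
	{
		struct mempolicy_old old = { .mode = MPOL_BIND, .v.nodes = 0x3 };
		struct mempolicy_new new = { .mode = MPOL_BIND, .nodes = 0x3 };

		printf("old: %#lx new: %#lx\n",
		       get_nodes_old(&old), get_nodes_new(&new));
		return 0;
	}

Both accessors print 0x3; the point is that the new one no longer needs
to know the mode, which is what lets the hunks above replace the
mode-specific v.preferred_nodes/v.nodes selection with a single field.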