From: Michal Hocko <mhocko@suse.com>
To: Feng Tang <feng.tang@intel.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
David Rientjes <rientjes@google.com>,
Dave Hansen <dave.hansen@intel.com>,
Ben Widawsky <ben.widawsky@intel.com>,
linux-kernel@vger.kernel.org,
Andrea Arcangeli <aarcange@redhat.com>,
Mel Gorman <mgorman@techsingularity.net>,
Mike Kravetz <mike.kravetz@oracle.com>,
Randy Dunlap <rdunlap@infradead.org>,
Vlastimil Babka <vbabka@suse.cz>, Andi Kleen <ak@linux.intel.com>,
Dan Williams <dan.j.williams@intel.com>,
ying.huang@intel.com
Subject: Re: [PATCH v1 4/4] mm/mempolicy: kill MPOL_F_LOCAL bit
Date: Thu, 27 May 2021 10:20:08 +0200
Message-ID: <YK9WOKBRsaFESPfR@dhcp22.suse.cz>
In-Reply-To: <1622005302-23027-5-git-send-email-feng.tang@intel.com>

On Wed 26-05-21 13:01:42, Feng Tang wrote:
> Now the only remaining case of a real 'local' policy faked by a
> 'prefer' policy plus the MPOL_F_LOCAL bit is:
>
> A valid 'prefer' policy with a valid 'preferred' node is 'rebound'
> to a nodemask which doesn't contain the 'preferred' node; it will
> then handle allocation with a 'local' policy.
>
> Add a new 'MPOL_F_LOCAL_TEMP' bit for this case, and kill the
> MPOL_F_LOCAL bit, which simplifies the code considerably.
As I've pointed out in my reply to the previous patch, it would have
been much better if most of the MPOL_F_LOCAL usage were already gone
by this patch.

I also dislike the new MPOL_F_LOCAL_TEMP flag. It smells like sneaking
the hack back in after you have painstakingly removed it, so this looks
like a step backwards to me. I also do not understand why we need the
rebind callback for the local policy at all. There is no nodemask for
local, so what is going on here?
[...]
> +static void mpol_rebind_local(struct mempolicy *pol,
> + const nodemask_t *nodes)
> +{
> + if (unlikely(pol->flags & MPOL_F_STATIC_NODES)) {
> + int node = first_node(pol->w.user_nodemask);
> +
> + BUG_ON(!(pol->flags & MPOL_F_LOCAL_TEMP));
> +
> + if (node_isset(node, *nodes)) {
> + pol->v.preferred_node = node;
> + pol->mode = MPOL_PREFERRED;
> + pol->flags &= ~MPOL_F_LOCAL_TEMP;
> + }
> + }
> +}
> +
I have to confess I've got lost here. Could you explain why you need
all this handling for a local policy?
> static void mpol_rebind_preferred(struct mempolicy *pol,
> const nodemask_t *nodes)
> {
> @@ -347,13 +363,19 @@ static void mpol_rebind_preferred(struct mempolicy *pol,
>
> if (node_isset(node, *nodes)) {
> pol->v.preferred_node = node;
> - pol->flags &= ~MPOL_F_LOCAL;
> - } else
> - pol->flags |= MPOL_F_LOCAL;
> + } else {
> + /*
> + * If there is no valid node, change the mode to
> + * MPOL_LOCAL, which will be restored back when
> + * next rebind() sees a valid node.
> + */
> + pol->mode = MPOL_LOCAL;
> + pol->flags |= MPOL_F_LOCAL_TEMP;
> + }
> } else if (pol->flags & MPOL_F_RELATIVE_NODES) {
> mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
> pol->v.preferred_node = first_node(tmp);
> - } else if (!(pol->flags & MPOL_F_LOCAL)) {
> + } else {
> pol->v.preferred_node = node_remap(pol->v.preferred_node,
> pol->w.cpuset_mems_allowed,
> *nodes);
> @@ -372,7 +394,7 @@ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
> {
> if (!pol)
> return;
> - if (!mpol_store_user_nodemask(pol) && !(pol->flags & MPOL_F_LOCAL) &&
> + if (!mpol_store_user_nodemask(pol) &&
> nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
> return;
>
> @@ -425,7 +447,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
> },
> [MPOL_LOCAL] = {
> .create = mpol_new_local,
> - .rebind = mpol_rebind_default,
> + .rebind = mpol_rebind_local,
> },
> };
--
Michal Hocko
SUSE Labs