linux-kernel.vger.kernel.org archive mirror
From: Dave Hansen <dave.hansen@intel.com>
To: Jonathan Adams <jwadams@google.com>
Cc: Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	"Williams, Dan J" <dan.j.williams@intel.com>,
	"Verma, Vishal L" <vishal.l.verma@intel.com>,
	Wu Fengguang <fengguang.wu@intel.com>,
	Huang Ying <ying.huang@intel.com>
Subject: Re: [RFC] Memory Tiering
Date: Thu, 24 Oct 2019 09:33:01 -0700	[thread overview]
Message-ID: <bab0848c-3229-bcb5-8921-d150939a7ce2@intel.com> (raw)
In-Reply-To: <CA+VK+GMAqMVXKQqjGzSj9P+-TKr_Jn6qQ1cHSyxhDsoChorm_w@mail.gmail.com>

On 10/23/19 4:11 PM, Jonathan Adams wrote:
> we would have a bidirectional attachment:
> 
> A is marked "move cold pages to" B
> B is marked "move hot pages to" A
> C is marked "move cold pages to" D
> D is marked "move hot pages to" C
> 
> By using autonuma for moving PMEM pages back to DRAM, you avoid
> needing the B->A  & D->C links, at the cost of migrating the pages
> back synchronously at pagefault time (assuming my understanding of how
> autonuma works is accurate).
> 
> Our approach still lets you have multiple levels of hierarchy for a
> given socket (you could imagine an "E" node with the same relation to
> "B" as "B" has to "A"), but doesn't make it easy to represent (say) an
> "E" which was equally close to all sockets (which I could imagine for
> something like remote memory on GenZ or what-have-you), since there
> wouldn't be a single back link; there would need to be something like
> your autonuma support to achieve that.
> 
> Does that make sense?

Yes, it does.  We've actually tried a few other approaches separate from
autonuma-based ones for promotion.  For some of those, we have a
promotion path which is separate from the demotion path.
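
To make the linkage in Jonathan's example concrete, here is a minimal
user-space sketch of the kind of per-node table being described.  The
names (struct tier_link, demote_target, promote_target) are purely
illustrative, not an existing kernel interface; node numbers follow the
A/B/C/D example (A=0 .. D=3), plus a deeper E=4 level hanging off B.

#include <stdio.h>

#define MAX_NUMA_NODES  8
#define NO_NODE         (-1)

/*
 * One entry per NUMA node.  demote_target is where cold pages are
 * pushed (DRAM -> PMEM); promote_target is the reverse link that the
 * bidirectional scheme adds (PMEM -> DRAM).  An autonuma-based
 * promotion path would leave promote_target unset and instead migrate
 * at fault time toward the accessing CPU.
 */
struct tier_link {
        int demote_target;
        int promote_target;
};

static const struct tier_link tier_links[MAX_NUMA_NODES] = {
        /* A(0) <-> B(1), C(2) <-> D(3), plus a deeper E(4) under B */
        [0] = { .demote_target = 1,       .promote_target = NO_NODE },
        [1] = { .demote_target = 4,       .promote_target = 0       },
        [2] = { .demote_target = 3,       .promote_target = NO_NODE },
        [3] = { .demote_target = NO_NODE, .promote_target = 2       },
        [4] = { .demote_target = NO_NODE, .promote_target = 1       },
};

int main(void)
{
        for (int n = 0; n <= 4; n++)
                printf("node %d: demote -> %d, promote -> %d\n",
                       n, tier_links[n].demote_target,
                       tier_links[n].promote_target);
        return 0;
}

A single back link per node is also what makes the "E equally close to
all sockets" case awkward: promote_target can only point one way.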

That said, I took a quick look to see what the autonuma behavior was and
couldn't find anything obvious.  Ying, when moving a slow page due to
autonuma, do we move it close to the CPU that did the access, or do we
promote it to the DRAM close to the slow memory where it is now?
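
For reference, the two candidate promotion targets in that question look
roughly like this in a user-space sketch built on libnuma.  This is not
kernel code; dram_mask is a stand-in for however a real implementation
would know which nodes are DRAM, and the node numbers in main() are
arbitrary examples.

#define _GNU_SOURCE
#include <limits.h>
#include <stdio.h>
#include <sched.h>      /* sched_getcpu() */
#include <numa.h>       /* numa_node_of_cpu(), numa_distance(); link with -lnuma */

/*
 * Policy 1: promote toward whichever CPU touched the page, the way a
 * fault-time, autonuma-style migration naturally would.
 */
static int promote_target_accessing_cpu(void)
{
        return numa_node_of_cpu(sched_getcpu());
}

/*
 * Policy 2: promote to the DRAM node closest to the slow node the page
 * currently sits on, i.e. follow a B -> A style back link.
 */
static int promote_target_near_slow_node(int slow_node, unsigned long dram_mask)
{
        int best = -1, best_dist = INT_MAX;

        for (int n = 0; n <= numa_max_node(); n++) {
                if (!(dram_mask & (1UL << n)))
                        continue;
                if (numa_distance(slow_node, n) < best_dist) {
                        best_dist = numa_distance(slow_node, n);
                        best = n;
                }
        }
        return best;
}

int main(void)
{
        if (numa_available() < 0)
                return 1;
        /* Assume node 0 is DRAM and node 1 is the slow (PMEM) node. */
        printf("policy 1 target: %d\n", promote_target_accessing_cpu());
        printf("policy 2 target: %d\n", promote_target_near_slow_node(1, 1UL << 0));
        return 0;
}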

Thread overview: 10+ messages
2019-10-16 20:05 [RFC] Memory Tiering Dave Hansen
2019-10-17  8:07 ` David Hildenbrand
2019-10-17 14:17   ` Dave Hansen
2019-10-17 17:07     ` Verma, Vishal L
2019-10-17 17:34       ` David Hildenbrand
2019-10-23 23:11 ` Jonathan Adams
2019-10-24 16:33   ` Dave Hansen [this message]
2019-10-25  3:30     ` Huang, Ying
2019-10-24 17:06   ` Yang Shi
2019-10-25  3:40   ` Huang, Ying
