From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Wei Liu <wei.liu2@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	xen-devel@lists.xen.org, Jan Beulich <jbeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains
Date: Tue, 4 Apr 2017 13:00:10 -0400	[thread overview]
Message-ID: <20170404170010.GC24507@char.us.oracle.com> (raw)
In-Reply-To: <CAPcyv4gm618KBqy6hZEEk9aP6hy_cZDhvR5=Tnd-tTci3HnDWw@mail.gmail.com>

On Sat, Apr 01, 2017 at 08:45:45AM -0700, Dan Williams wrote:
> On Sat, Apr 1, 2017 at 4:54 AM, Konrad Rzeszutek Wilk <konrad@darnok.org> wrote:
> > ..snip..
> >> >> Is there a resource I can read more about why the hypervisor needs to
> >> >> have this M2P mapping for nvdimm support?
> >> >
> >> > M2P is basically an array of frame numbers. It's indexed by the host
> >> > page frame number, or the machine frame number (MFN) in Xen's
> >> > definition. The n'th entry records the guest page frame number that is
> >> > mapped to MFN n. M2P is one of the core data structures used in Xen
> >> > memory management, and is used to convert MFN to guest PFN. A
> >> > read-only version of M2P is also exposed as part of ABI to guest. In
> >> > the previous design discussion, we decided to put the management of
> >> > NVDIMM in the existing Xen memory management as much as possible, so
> >> > we need to build M2P for NVDIMM as well.
> >> >
> >>
> >> Thanks, but what I don't understand is why this M2P lookup is needed?
> >
> > Xen uses it to construct the EPT page tables for the guests.
> >
> >> Does Xen establish this metadata for PCI mmio ranges as well? What Xen
> >
> > It doesn't have that (M2P) for PCI MMIO ranges. For those it has a
> > ranges construct (since those are usually contiguous and given
> > to a guest in ranges).
> 
> So, I'm confused again. This patchset / enabling requires both M2P and
> contiguous PMEM ranges. If the PMEM is contiguous it seems you don't
> need M2P and can just reuse the MMIO enabling, or am I missing
> something?

I think I am confusing you.

The patchset (specifically [04/15] xen/x86: add XEN_SYSCTL_nvdimm_pmem_setup to setup host pmem)
adds a hypercall to tell Xen where on the NVDIMM it can put
the M2P array as well as the frametables ('struct page').

There is no range support. The reason is that if you break up
an NVDIMM into various chunks (and then put a filesystem on top of it),
then figure out which SPAs belong to a given file, and then
"expose" that file to a guest as an NVDIMM, its SPAs won't
be contiguous. Hence the hypervisor would need to break the
'ranges' structure down into either a bitmap or an M2P
and also store it. This can get quite tricky, so you may
as well just start with an M2P and 'struct page'.

The placement of those data structures is, quoting the v2 cover letter:

"v2 patch series relies on users/admins in Dom0 instead of Dom0 driver
to indicate the location to store the frametable and M2P of pmem."

Hope this helps?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

