From: Dan Williams <dan.j.williams@intel.com>
To: Dan Williams <dan.j.williams@intel.com>,
	xen-devel@lists.xen.org,
	Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains
Date: Thu, 30 Mar 2017 09:01:14 -0700
Message-ID: <CAPcyv4iqOHVYnDcU96G-VCrmHk9vK9gUgmaGZMpOMBZnQRrRWg@mail.gmail.com>
In-Reply-To: <20170330082136.pqa45bjb4omitzdy@hz-desktop>

On Thu, Mar 30, 2017 at 1:21 AM, Haozhong Zhang
<haozhong.zhang@intel.com> wrote:
> On 03/29/17 21:20 -0700, Dan Williams wrote:
>> On Sun, Mar 19, 2017 at 5:09 PM, Haozhong Zhang
>> <haozhong.zhang@intel.com> wrote:
>> > This is the v2 RFC patch series to add vNVDIMM support to HVM domains.
>> > v1 can be found at https://lists.xenproject.org/archives/html/xen-devel/2016-10/msg00424.html.
>> >
>> > This version supports neither labels nor any _DSM function except
>> > function 0 ("query implemented functions"), but those will be added
>> > by future patches.
>> >
>> > The corresponding Qemu patch series is sent in another thread
>> > "[RFC QEMU PATCH v2 00/10] Implement vNVDIMM for Xen HVM guest".
>> >
>> > All patch series can be found at
>> >   Xen:  https://github.com/hzzhan9/xen.git nvdimm-rfc-v2
>> >   Qemu: https://github.com/hzzhan9/qemu.git xen-nvdimm-rfc-v2
>> >
>> > Changes in v2
>> > ==============
>> >
>> > - One of the primary changes in v2 is dropping the Linux kernel
>> >   patches, which were used to reserve an area on host pmem for
>> >   placing its frametable and M2P table. In v2, we instead add a
>> >   management tool xen-ndctl, which is used in Dom0 to notify the
>> >   Xen hypervisor of which storage can be used to manage the host
>> >   pmem.
>> >
>> >   For example,
>> >   1.   xen-ndctl setup 0x240000 0x380000 0x380000 0x3c0000
>> >     tells the Xen hypervisor to use host pmem pages at MFN 0x380000 ~
>> >     0x3c0000 to manage host pmem pages at MFN 0x240000 ~ 0x380000,
>> >     i.e. the former range is used to place the frame table and M2P
>> >     table of both ranges of pmem pages.
>> >
>> >   2.   xen-ndctl setup 0x240000 0x380000
>> >     tells the Xen hypervisor to use regular RAM to manage the host
>> >     pmem pages at MFN 0x240000 ~ 0x380000, i.e. regular RAM is used
>> >     to place the frame table and M2P table.
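>> >
>> >   (For scale, assuming 4 KiB pages: the data range 0x240000 ~
>> >   0x380000 is 0x140000 pages = 5 GiB of pmem, and the management
>> >   range 0x380000 ~ 0x3c0000 is 0x40000 pages = 1 GiB, so in
>> >   example 1 that 1 GiB of pmem holds the frame table and M2P
>> >   entries covering all 6 GiB (both ranges) of pmem pages.)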
>> >
>> > - Another primary change in v2 is dropping the support to map files
>> >   on the host pmem to HVM domains as virtual NVDIMMs, as I cannot
>> >   find a stable way to fix the fiemap of host files. Instead, we can
>> >   rely on the ability added in Linux kernel v4.9 to create multiple
>> >   pmem namespaces on a single nvdimm interleave set.
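>> >
>> >   (For illustration only -- the region name and size here are made
>> >   up -- such namespaces could be created in Dom0 with the upstream
>> >   ndctl tool, e.g.:
>> >
>> >       ndctl create-namespace --region=region0 --size=16G --mode=raw
>> >
>> >   and repeating the command carves additional pmem namespaces out
>> >   of the same interleave set.)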
>>
>> This restriction is unfortunate, and it seems to limit the future
>> architecture of the pmem driver. We may not always be able to
>> guarantee a contiguous physical address range to Xen for a given
>> namespace and may want to concatenate disjoint physical address ranges
>> into a logically contiguous namespace.
>>
>
> The hypervisor code that actually maps host pmem addresses to a guest
> does not require the host addresses to be contiguous. We can modify
> the toolstack code that gets the address range from a namespace to
> support passing multiple address ranges to the Xen hypervisor.
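>
> (As a rough sketch of what I mean -- the structure names and fields
> below are hypothetical, not the actual interface in the v2 patches --
> the mapping call could carry an array of ranges instead of one:
>
>     /* hypothetical multi-range variant of the mapping interface */
>     struct pmem_map_range {
>         unsigned long mfn_start;    /* first host pmem MFN */
>         unsigned long gpfn_start;   /* first guest PFN to map it at */
>         unsigned long nr_pages;     /* length of this range in pages */
>     };
>
>     struct pmem_map_args {
>         unsigned short domid;           /* target HVM domain */
>         unsigned int nr_ranges;         /* entries in ranges[] */
>         struct pmem_map_range ranges[]; /* possibly discontiguous */
>     };
>
> so the toolstack could translate one namespace into several entries.)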
>
>> Is there a resource I can read more about why the hypervisor needs to
>> have this M2P mapping for nvdimm support?
>
> M2P is basically an array of frame numbers. It's indexed by the host
> page frame number, or the machine frame number (MFN) in Xen's
> terminology. The n'th entry records the guest page frame number that
> is mapped to MFN n. M2P is one of the core data structures in Xen
> memory management, and is used to convert MFNs to guest PFNs. A
> read-only version of M2P is also exposed as part of the ABI to
> guests. In the previous design discussion, we decided to fold the
> management of NVDIMM into the existing Xen memory management as much
> as possible, so we need to build M2P for NVDIMM as well.
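>
> (Conceptually -- simplified from Xen's actual macros, which add
> validity checks -- the lookup is just an array index:
>
>     /* one entry per host page frame, indexed by MFN */
>     extern unsigned long machine_to_phys_mapping[];
>
>     /* MFN -> guest PFN, i.e. the M2P lookup */
>     #define get_gpfn_from_mfn(mfn) (machine_to_phys_mapping[(mfn)])
>
> so covering NVDIMM with M2P means allocating and filling these
> entries for every host pmem MFN as well.)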
>

Thanks, but what I still don't understand is why this M2P lookup is
needed. Does Xen establish this metadata for PCI MMIO ranges as well?
What Xen memory management operations does this enable? Sorry if these
are basic Xen questions; I'm just looking to see if we can make the
mapping support more dynamic. For example, what if we wanted to change
the MFN to guest PFN relationship after every fault?

