From: "Zhang, Haozhong" <haozhong.zhang@intel.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: Juergen Gross <jgross@suse.com>, Wei Liu <wei.liu2@citrix.com>,
Ian Campbell <ian.campbell@citrix.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
Jan Beulich <jbeulich@suse.com>,
"Nakajima, Jun" <jun.nakajima@intel.com>,
Xiao Guangrong <guangrong.xiao@linux.intel.com>,
Keir Fraser <keir@xen.org>
Subject: Re: [RFC Design Doc] Add vNVDIMM support for Xen
Date: Tue, 2 Feb 2016 15:39:02 +0800 [thread overview]
Message-ID: <20160202073901.GI6293@hz-desktop.sh.intel.com> (raw)
In-Reply-To: <AADFC41AFE54684AB9EE6CBC0274A5D15F793F7D@SHSMSX101.ccr.corp.intel.com>
Hi Kevin,
Thanks for your review!
On 02/02/16 14:33, Tian, Kevin wrote:
> > From: Zhang, Haozhong
> > Sent: Monday, February 01, 2016 1:44 PM
> >
> [...]
> >
> > 1.2 ACPI Support
> >
> > ACPI provides two kinds of support for NVDIMM. First, NVDIMM
> > devices are described by firmware (BIOS/EFI) to OS via ACPI-defined
> > NVDIMM Firmware Interface Table (NFIT). Second, several functions of
> > NVDIMM, including operations on namespace labels, S.M.A.R.T and
> > hotplug, are provided by ACPI methods (_DSM and _FIT).
> >
> > 1.2.1 NFIT
> >
> > NFIT is a new system description table added in ACPI v6 with
> > signature "NFIT". It contains a set of structures.
>
> Can I consider only NFIT as a minimal requirement, while the other
> stuff (_DSM and _FIT) is optional?
>
No. ACPI namespace devices for NVDIMM should also be present. However,
the _DSM under those ACPI namespace devices can be implemented to
support no functions. _FIT is optional and is used for NVDIMM hotplug.
> >
> >
> > 2. NVDIMM/vNVDIMM Support in Linux Kernel/KVM/QEMU
> >
> > 2.1 NVDIMM Driver in Linux Kernel
> >
> [...]
> >
> > Userspace applications can mmap(2) the whole pmem into their own
> > virtual address spaces. The Linux kernel maps the system physical
> > address range occupied by pmem into the virtual address space, so
> > that ordinary memory loads/stores (with proper flushing
> > instructions) are applied to the underlying pmem NVDIMM regions.
> >
> > Alternatively, a DAX file system can be made on /dev/pmemX. Files on
> > that file system can be used in the same way as above. As Linux
> > kernel maps the system address space range occupied by those files on
> > NVDIMM to the virtual address space, reads/writes on those files are
> > applied to the underlying NVDIMM regions as well.
>
> Does it mean only file-based interface is supported by Linux today, and
> pmem aware application cannot use normal memory allocation interface
> like malloc for the purpose?
>
Right.
> >
> > 2.2 vNVDIMM Implementation in KVM/QEMU
> >
> > (1) Address Mapping
> >
> > As described before, the host Linux NVDIMM driver provides a block
> > device interface (/dev/pmem0 at the bottom) for a pmem NVDIMM
> > region. QEMU can then mmap(2) that device into its virtual address
> > space (buf). QEMU is responsible for finding a guest physical
> > address range large enough to hold /dev/pmem0. QEMU then passes the
> > virtual address of the mmapped buf to the KVM API
> > KVM_SET_USER_MEMORY_REGION, which maps in EPT the host physical
> > address range of buf to the guest physical address range where the
> > virtual pmem device will be.
> >
> > In this way, all guest reads/writes on the virtual pmem device are
> > applied directly to the host one.
> >
> > Besides, the above implementation also allows backing a virtual pmem
> > device by an mmapped regular file or a piece of ordinary RAM.
>
> What's the point of backing pmem with ordinary RAM? I can buy into
> the value of the file-backed option, which although slower does
> sustain the persistency attribute. However with the RAM-backed method
> there's no persistency, so it violates guest expectations.
>
Well, it is not a necessity. The current vNVDIMM implementation in
QEMU reuses the QEMU DIMM device, which happens to support a RAM
backend. A possible usage is for debugging vNVDIMM on machines without
NVDIMM.
> btw, how is persistency guaranteed in KVM/QEMU across guest power
> off/on? I guess since the QEMU process is killed the allocated pmem
> will be freed, so you may switch to the file-backed method to keep
> persistency (however the copy would take time for a large pmem
> chunk). Or will you find some way to keep pmem managed separately
> from the QEMU life-cycle (then pmem is not efficiently reused)?
>
It all depends on the guests themselves. The clwb/clflushopt/pcommit
instructions are exposed to guests, which use them to make their
writes to pmem persistent.
Haozhong
> > 3. Design of vNVDIMM in Xen
> >
> > 3.2 Address Mapping
> >
> > 3.2.2 Alternative Design
> >
> > Jan Beulich's comments [7] on my question "why must pmem resource
> > management and partition be done in hypervisor":
> > | Because that's where memory management belongs. And PMEM,
> > | other than PBLK, is just another form of RAM.
> > | ...
> > | The main issue is that this would imo be a layering violation
> >
> > George Dunlap's comments [8]:
> > | This is not the case for PMEM. The whole point of PMEM (correct me if
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ used as fungible ram
> > | I'm wrong) is to be used for long-term storage that survives over
> > | reboot. It matters very much that a guest be given the same PRAM
> > | after the host is rebooted that it was given before. It doesn't make
> > | any sense to manage it the way Xen currently manages RAM (i.e., that
> > | you request a page and get whatever Xen happens to give you).
> > |
> > | So if Xen is going to use PMEM, it will have to invent an entirely new
> > | interface for guests, and it will have to keep track of those
> > | resources across host reboots. In other words, it will have to
> > | duplicate all the work that Linux already does. What do we gain from
> > | that duplication? Why not just leverage what's already implemented in
> > | dom0?
> > and [9]:
> > | Oh, right -- yes, if the usage model of PRAM is just "cheap slow RAM",
> > | then you're right -- it is just another form of RAM, that should be
> > | treated no differently than say, lowmem: a fungible resource that can be
> > | requested by setting a flag.
> >
> > However, pmem is used more as persistent storage than as fungible
> > RAM, and my design is for the former usage. I would like to leave
> > the detection, driver, and partitioning (either through namespaces
> > or file systems) of NVDIMM to the Dom0 Linux kernel.
>
> After reading the whole introduction I vote for this option too. One
> immediate reason why a resource should be managed in Xen is that Xen
> itself also uses it, e.g. normal RAM. In that case Xen has to control
> the whole resource to protect itself from Dom0 and other user VMs.
> Given a resource not used by Xen at all, it's reasonable to leave it
> to Dom0, which reduces code duplication and unnecessary maintenance
> burden on the Xen side, as we have done for the whole PCI sub-system
> and other I/O peripherals. I'm not sure whether there's future value
> in using pmem in Xen itself; at least for now the primary requirement
> is exposing pmem to guests. From that angle, reusing the NVDIMM
> driver in Dom0 looks the better choice, with less enabling effort to
> catch up with KVM.
>
> Thanks
> Kevin