From: "Jan Beulich" <JBeulich@suse.com>
To: Haozhong Zhang <haozhong.zhang@intel.com>
Cc: Juergen Gross <JGross@suse.com>,
Kevin Tian <kevin.tian@intel.com>, Wei Liu <wei.liu2@citrix.com>,
Ian Campbell <ian.campbell@citrix.com>,
Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>,
xen-devel@lists.xen.org, Jun Nakajima <jun.nakajima@intel.com>,
Xiao Guangrong <guangrong.xiao@linux.intel.com>,
Keir Fraser <keir@xen.org>
Subject: Re: [RFC Design Doc] Add vNVDIMM support for Xen
Date: Wed, 24 Feb 2016 07:24:07 -0700 [thread overview]
Message-ID: <56CDCB1702000078000D5B6C@prv-mh.provo.novell.com> (raw)
In-Reply-To: <20160224132855.GA2627@hz-desktop.sh.intel.com>
>>> On 24.02.16 at 14:28, <haozhong.zhang@intel.com> wrote:
> On 02/18/16 10:17, Jan Beulich wrote:
>> >>> On 01.02.16 at 06:44, <haozhong.zhang@intel.com> wrote:
>> > This design treats host NVDIMM devices as ordinary MMIO devices:
>>
>> Wrt the cachability note earlier on, I assume you're aware that with
>> the XSA-154 changes we disallow any cachable mappings of MMIO
>> by default.
>>
>
> The EPT entries that map the host NVDIMM SPAs into the guest will be
> the only mapping used for NVDIMM. If the memory type in the last-level
> entries is always set to the same type reported by the NFIT, and the
> IPAT bit is always set as well, would it not avoid the cache-type
> inconsistency problem from XSA-154?
This assumes Xen knows the NVDIMM address ranges, which so
far you meant to keep out of Xen iirc. But yes, things surely can
be made to work; I simply wanted to point out that there are some
caveats.
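For reference, the IPAT point above can be illustrated with a minimal sketch (plain Python, not Xen code; bit positions per the Intel SDM's EPT leaf-entry layout: bits 0-2 R/W/X permissions, bits 5:3 EPT memory type, bit 6 "ignore PAT", bits 51:12 host frame number):

```python
# Minimal sketch (not Xen code) of a leaf EPT entry that pins the
# memory type by setting IPAT, as proposed for the NVDIMM mappings.
# Bit layout per the Intel SDM: bits 0-2 = R/W/X, bits 5:3 = EPT
# memory type, bit 6 = IPAT, bits 51:12 = host frame number.

EPT_READ, EPT_WRITE, EPT_EXEC = 1 << 0, 1 << 1, 1 << 2
EPT_MT_SHIFT = 3            # bits 5:3: EPT memory type
EPT_IPAT = 1 << 6           # ignore the guest's PAT
MT_WB = 6                   # write-back, per the SDM encoding

def make_ept_leaf(mfn, memtype=MT_WB):
    """Encode a fully-permissioned leaf entry whose memory type is
    forced by IPAT, regardless of the guest's PAT settings."""
    return ((mfn << 12) | (memtype << EPT_MT_SHIFT) | EPT_IPAT
            | EPT_READ | EPT_WRITE | EPT_EXEC)

entry = make_ept_leaf(0x12345)
assert (entry >> EPT_MT_SHIFT) & 0x7 == MT_WB and entry & EPT_IPAT
```

The point being: as long as every leaf entry for the NVDIMM range carries the NFIT-reported type plus IPAT, the effective type cannot be changed by guest PAT manipulation.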
>> > (1) The Dom0 Linux NVDIMM driver is responsible for detecting (through
>> > the NFIT) and driving host NVDIMM devices (implementing a block device
>> > interface). Namespaces and file systems on host NVDIMM devices
>> > are handled by Dom0 Linux as well.
>> >
>> > (2) QEMU mmap(2)s the pmem NVDIMM device (/dev/pmem0) into its
>> > virtual address space (buf).
>> >
>> > (3) QEMU gets the host physical address of buf, i.e. the host system
>> > physical address that is occupied by /dev/pmem0, and calls the Xen
>> > hypercall XEN_DOMCTL_memory_mapping to map it into a DomU.
>> >
>> > (ACPI part is described in Section 3.3 later)
>> >
>> > Above, (1) and (2) are already done in current QEMU; only (3) needs
>> > to be implemented in QEMU. No change is needed in Xen for address
>> > mapping in this design.
>> >
>> > Open: It seems no system call/ioctl is provided by the Linux kernel
>> > to get the physical address corresponding to a virtual address.
>> > /proc/<qemu_pid>/pagemap provides information about the mapping
>> > from VA to PA. Is it an acceptable solution to let QEMU parse this
>> > file to get the physical address?
>> >
>> > Open: For a large pmem, mmap(2) may well not map all of the SPA
>> > occupied by the pmem at the beginning, i.e. QEMU may not be able
>> > to get all SPAs of the pmem from buf (in virtual address space)
>> > when calling XEN_DOMCTL_memory_mapping.
>> > Can the mmap flag MAP_LOCKED or mlock(2) be used to force the
>> > entire pmem to be mmap()ed?
>>
>> A fundamental question I have here is: Why does qemu need to
>> map this at all? It shouldn't itself need to access those ranges,
>> since the guest is given direct access. It would seem quite a bit
>> more natural if qemu simply inquired about the underlying GFN range(s)
>> and handed those to Xen for translation to MFNs and mapping
>> into guest space.
>>
>
> The above design is more of a hack on the existing QEMU
> implementation for KVM, done without modifying the Dom0 kernel.
>
> Maybe it's better to let QEMU get the SPAs from the Dom0 kernel (NVDIMM
> driver) and then pass them to Xen for the address mapping:
> (1) QEMU passes the fd of /dev/pmemN, or of a file on /dev/pmemN, to
> the Dom0 kernel.
> (2) The Dom0 kernel finds and returns all SPAs occupied by /dev/pmemN,
> or by the portions of /dev/pmemN where the file sits.
> (3) QEMU passes the above SPAs, and the GMFNs where they are to be
> mapped, to Xen, which maps the SPAs to the GMFNs.
Indeed, and that would also eliminate the second of your Opens
above.
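The bookkeeping in step (3) of the revised flow amounts to turning driver-reported SPA extents into per-extent mapping arguments. A minimal sketch (illustrative names only; XEN_DOMCTL_memory_mapping takes a first GFN, first MFN, and frame count per invocation):

```python
# Sketch: turn page-aligned (spa, size) extents, as a Dom0 driver might
# report them, into (gfn, mfn, nr_mfns) triples laid out contiguously
# in guest physical space. One triple corresponds to the arguments of
# one XEN_DOMCTL_memory_mapping call. Names here are illustrative.

PAGE_SHIFT = 12
PAGE_SIZE = 1 << PAGE_SHIFT

def spa_extents_to_mappings(extents, base_gpa):
    """Map each SPA extent to the next free guest-physical slot,
    starting at base_gpa, returning (gfn, mfn, nr_mfns) triples."""
    mappings = []
    gpa = base_gpa
    for spa, size in extents:
        assert spa % PAGE_SIZE == 0 and size % PAGE_SIZE == 0
        mappings.append((gpa >> PAGE_SHIFT, spa >> PAGE_SHIFT,
                         size >> PAGE_SHIFT))
        gpa += size
    return mappings

# Two hypothetical extents of 2MiB and 1MiB, mapped from 4GiB upward:
maps = spa_extents_to_mappings([(0x1_0000_0000, 0x200000),
                                (0x2_4000_0000, 0x100000)],
                               base_gpa=0x1_0000_0000)
```

Note that the guest-physical layout stays contiguous even when the host SPA extents are scattered, which is what makes the per-extent hypercall interface sufficient here.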
>> > 3.3 Guest ACPI Emulation
>> >
>> > 3.3.1 My Design
>> >
>> > Guest ACPI emulation is composed of two parts: building guest NFIT
>> > and SSDT that defines ACPI namespace devices for NVDIMM, and
>> > emulating guest _DSM.
>> >
>> > (1) Building Guest ACPI Tables
>> >
>> > This design reuses and extends hvmloader's existing mechanism that
>> > loads passthrough ACPI tables from binary files to load NFIT and
>> > SSDT tables built by QEMU:
>> > 1) Because the current QEMU does not build any ACPI tables when
>> > it runs as the Xen device model, this design patches QEMU
>> > to build NFIT and SSDT (so far only NFIT and SSDT) in this case.
>> >
>> > 2) QEMU copies NFIT and SSDT to the end of guest memory below
>> > 4G. The guest address and size of those tables are written into
>> > xenstore (/local/domain/domid/hvmloader/dm-acpi/{address,length}).
>> >
>> > 3) hvmloader is patched to probe and load device model passthrough
>> > ACPI tables from the above xenstore keys. The detected ACPI tables
>> > are then appended to the end of the existing guest ACPI tables, just
>> > as the current construct_passthrough_tables() does.
>> >
>> > Reasons for this design are listed below:
>> > - The NFIT and SSDT in question are quite self-contained, i.e. they
>> > do not refer to other ACPI tables and do not conflict with the
>> > existing guest ACPI tables built by Xen. Therefore, it is safe to
>> > copy them from QEMU and append them to the existing guest ACPI tables.
>>
>> How is this absence of conflicts guaranteed? In particular I don't
>> see what prevents tables containing AML code and coming from
>> different sources from causing ACPI name space collisions.
>>
>
> Really, there is no effective mechanism to avoid ACPI name space
> collisions (and other kinds of conflicts) between the ACPI tables
> loaded from QEMU and the ACPI tables built by hvmloader. Because which
> ACPI tables are loaded is determined by developers, IMO it is the
> developers' responsibility to avoid any collisions and conflicts with
> the existing ACPI tables.
Right, but this needs to be spelled out and settled on at design
time (i.e. now), rather than leaving things unspecified and awaiting
the first clash.
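One thing that could be settled at design time: hvmloader could at least sanity-check the common SDT header and checksum of any passthrough table before appending it. A sketch (plain Python with hypothetical helper names; the rule that all bytes of an ACPI table sum to 0 mod 256 is per the ACPI spec):

```python
# Sketch of the minimal sanity check hvmloader could apply to a
# device-model-provided table before appending it: parse the common
# 36-byte ACPI SDT header and verify the whole-table checksum.
# Helper names are illustrative, not existing hvmloader functions.
import struct

def acpi_checksum_ok(table: bytes) -> bool:
    """Per the ACPI spec, all bytes of a table must sum to 0 mod 256."""
    return sum(table) % 256 == 0

def parse_sdt_header(table: bytes):
    """Unpack signature, length, revision and checksum from the
    common SDT header (offsets 0, 4, 8 and 9 respectively)."""
    sig, length, rev, csum = struct.unpack_from("<4sIBB", table, 0)
    return sig.decode(), length, rev, csum

# Build a toy 40-byte "SSDT" and fix up its checksum byte (offset 9):
body = bytearray(b"SSDT") + struct.pack("<IBB", 40, 2, 0) + bytes(30)
body[9] = (256 - sum(body)) % 256
assert acpi_checksum_ok(bytes(body))
```

Rejecting tables that fail this check would not prevent name space collisions, but it would at least catch truncated or corrupted tables at the hand-over boundary.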
Jan