Date: Mon, 15 Feb 2016 12:30:41 +0200
From: "Michael S. Tsirkin"
Message-ID: <20160215122458-mutt-send-email-mst@redhat.com>
In-Reply-To: <56C1A4D2.3060402@linux.intel.com>
Subject: Re: [Qemu-devel] [PATCH v2 06/11] nvdimm acpi: initialize the resource used by NVDIMM ACPI
To: Xiao Guangrong
Cc: ehabkost@redhat.com, kvm@vger.kernel.org, gleb@kernel.org, mtosatti@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com, pbonzini@redhat.com, Igor Mammedov, dan.j.williams@intel.com, rth@twiddle.net

On Mon, Feb 15, 2016 at 06:13:38PM +0800, Xiao Guangrong wrote:
>
>
> On 02/15/2016 05:18 PM, Michael S.
Tsirkin wrote:
> >On Mon, Feb 15, 2016 at 10:11:05AM +0100, Igor Mammedov wrote:
> >>On Sun, 14 Feb 2016 13:57:27 +0800
> >>Xiao Guangrong wrote:
> >>
> >>>On 02/08/2016 07:03 PM, Igor Mammedov wrote:
> >>>>On Wed, 13 Jan 2016 02:50:05 +0800
> >>>>Xiao Guangrong wrote:
> >>>>
> >>>>>A 32-bit IO port starting at 0x0a18 in the guest is reserved for
> >>>>>NVDIMM ACPI emulation. The table, NVDIMM_DSM_MEM_FILE, will be
> >>>>>patched into the NVDIMM ACPI binary code.
> >>>>>
> >>>>>OSPM uses this port to tell QEMU the final address of the DSM memory
> >>>>>and to notify QEMU to emulate the DSM method.
> >>>>Would you need to pass control to QEMU if each NVDIMM had its whole
> >>>>label area MemoryRegion mapped right after its storage MemoryRegion?
> >>>>
> >>>
> >>>No, label data is not mapped into the guest's address space; it can
> >>>only be accessed indirectly via the DSM method.
> >>Yep, per spec label data should be accessed via _DSM, but the question
> >>wasn't about that.
>
> Ah, sorry, I missed your question.
>
> >>Why map only a 4Kb window and serialize label data through it if it
> >>could be mapped as a whole? That way the _DSM method would be much
> >>less complicated and there would be no need to add/support a protocol
> >>for its serialization.
> >>
> >
> >Is it ever accessed on the data path? If not, I prefer the current
> >approach:
>
> The label data is only accessed via two DSM commands - Get Namespace
> Label Data and Set Namespace Label Data; no other place needs to be
> emulated.
>
> >limit the window used; the serialization protocol seems rather simple.
> >
>
> Yes.
>
> Label data is at least 128K, which is quite big for the BIOS, as it
> allocates memory in the 0 ~ 4G region, which is a tight area. It also
> needs the guest OS to support a larger max-xfer (the max size that can
> be transferred at one time); the size in the current Linux NVDIMM
> driver is 4K.
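[A sketch of the chunked access being discussed, not code from the patch: with a 4K DSM window, the guest driver reads the label area in max-xfer-sized pieces via Get Namespace Label Data (offset, length) calls, so the window only bounds the per-call transfer, not the total label size. The helper names (`read_label_area`, `fake_dsm`) are hypothetical; the 128K and 4K figures are the ones quoted above.]

```python
LABEL_AREA_SIZE = 128 * 1024   # label data is at least 128K, per the thread
MAX_XFER = 4 * 1024            # current Linux NVDIMM driver's per-call limit

def read_label_area(dsm_get_label_data):
    """Concatenate the whole label area from chunked DSM reads."""
    data = bytearray()
    offset = 0
    while offset < LABEL_AREA_SIZE:
        length = min(MAX_XFER, LABEL_AREA_SIZE - offset)
        data += dsm_get_label_data(offset, length)
        offset += length
    return bytes(data)

# Stand-in for the emulated DSM call: serve reads from a backing buffer.
backing = bytes(range(256)) * (LABEL_AREA_SIZE // 256)
calls = []
def fake_dsm(offset, length):
    calls.append(length)
    return backing[offset:offset + length]

assert read_label_area(fake_dsm) == backing
assert len(calls) == 32        # 128K / 4K = 32 round trips into QEMU
```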
>
> However, using a larger DSM buffer can help us simplify NVDIMM hotplug
> for the case where too many NVDIMM devices are present in the system
> and their FIT info cannot fit into one page. Each PMEM-only device
> needs 0xb8 bytes, and we can append 256 memory devices at most, so 12
> pages are needed to contain this info. The prototype we implemented
> uses a self-defined protocol to read pieces of _FIT and concatenate
> them before returning to the guest; please refer to:
> https://github.com/xiaogr/qemu/commit/c46ce01c8433ac0870670304360b3c4aa414143a
>
> As 12 pages are not a small region for the BIOS, and the _FIT size may
> be extended in future development (e.g., if PBLK is introduced), I am
> not sure if we need this. Of course, another approach to simplify it is
> to limit the number of NVDIMM devices to make sure their _FIT < 4K.

A bigger buffer would only be reasonable in the 64-bit window (>4G); the
<4G area is too busy for that. Would that be a problem?

-- 
MST
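[Editor's note: a quick sanity check of the _FIT sizing quoted in the thread, using only the figures given there (0xb8 bytes per PMEM-only device, at most 256 appended devices, 4K pages).]

```python
PAGE_SIZE = 4096
PER_DEVICE_BYTES = 0xb8        # 184 bytes per PMEM-only NVDIMM, per the mail
MAX_DEVICES = 256              # append limit quoted in the mail

fit_bytes = PER_DEVICE_BYTES * MAX_DEVICES   # 47104 bytes total
pages = -(-fit_bytes // PAGE_SIZE)           # ceiling division

assert fit_bytes == 47104
assert pages == 12             # matches the "12 pages" figure above
```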