From: Dan Williams <dan.j.williams@intel.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>,
	Xiao Guangrong <guangrong.xiao@linux.intel.com>,
	ehabkost@redhat.com, KVM list <kvm@vger.kernel.org>,
	Gleb Natapov <gleb@kernel.org>,
	mtosatti@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com,
	Paolo Bonzini <pbonzini@redhat.com>,
	rth@twiddle.net
Subject: Re: [Qemu-devel] [PATCH v2 06/11] nvdimm acpi: initialize the resource used by NVDIMM ACPI
Date: Fri, 19 Feb 2016 00:43:16 -0800	[thread overview]
Message-ID: <CAPcyv4gkQq7w9wH12iwxzwZ6cbaKh2viHvwpaHUKdbBugiXiPg@mail.gmail.com> (raw)
In-Reply-To: <20160219100211-mutt-send-email-mst@redhat.com>

On Fri, Feb 19, 2016 at 12:08 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Thu, Feb 18, 2016 at 11:05:23AM +0100, Igor Mammedov wrote:
>> On Thu, 18 Feb 2016 12:03:36 +0800
>> Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
>>
>> > On 02/18/2016 01:26 AM, Michael S. Tsirkin wrote:
>> > > On Wed, Feb 17, 2016 at 10:04:18AM +0800, Xiao Guangrong wrote:
>> > >>>>> As for the rest, could those commands go via the MMIO that we usually
>> > >>>>> use for the control path?
>> > >>>>
>> > >>>> So both input data and output data would go through a single MMIO region;
>> > >>>> we would need to introduce a protocol to pass this data - isn't that complex?
>> > >>>>
>> > >>>> And is there any existing MMIO we can reuse (even more complex?), or should we
>> > >>>> allocate this MMIO page (the old question - where to allocate it?)?
>> > >>> Maybe you could reuse/extend the memhotplug IO interface,
>> > >>> or alternatively, as Michael suggested, add a vendor-specific PCI_Config.
>> > >>> I'd suggest the PM device for that (hw/acpi/[piix4.c|ich9.c]),
>> > >>> which I like even better since you won't need to care about which ports
>> > >>> to allocate at all.
>> > >>
>> > >> Well, if Michael does not object, I will do it in the next version. :)
>> > >
>> > > Sorry, the thread's so long by now that I'm no longer sure what "it" refers to.
>> >
>> > Never mind, I saw you were busy with other threads.
>> >
>> > "It" means the suggestion of Igor that "map each label area right after each
>> > NVDIMM's data memory"
>> Michael pointed out that putting a label right after each NVDIMM
>> might burn up to 256GB of address space for 256 NVDIMMs, due to the DIMMs' alignment.
>> However, if the address for each label is picked with pc_dimm_get_free_addr()
>> and the label's MemoryRegion alignment is the default 2MB, then all labels
>> would be allocated close to each other within a single 1GB range.
>>
>> That would burn only 1GB even for 500 labels, which is more than the maximum of 256 NVDIMMs.
>
> I thought about it: once we support hotplug, this means that one will
> have to pre-declare how much is needed so QEMU can mark the correct amount of
> memory reserved, and that would be nasty. Maybe we always pre-reserve 1 GByte.
> Okay, but the next time we need something, do we steal another gigabyte?
> It seems too much; I'll think it over on the weekend.
>
> Really, most other devices manage to get by with 4K chunks just fine; I
> don't see why we are so special and need to steal gigabytes of
> physically contiguous address ranges.
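
Just to make the single-page idea concrete: the framing needed to push a
command, its input, and its output through one 4K window is small.  A
purely illustrative sketch (the field names below are invented for the
example and are not from this patchset):

#include <assert.h>
#include <stdint.h>

#define DSM_PAGE_SIZE 4096

struct dsm_page {
    /* written by the guest before it kicks the doorbell */
    uint32_t command;        /* DSM function number */
    uint32_t handle;         /* which NVDIMM the call targets */
    uint32_t in_length;      /* valid bytes in buf[] on input */
    /* written by the hypervisor before it signals completion */
    uint32_t status;         /* 0 = success, otherwise an error code */
    uint32_t out_length;     /* valid bytes in buf[] on output */
    uint32_t reserved[3];
    uint8_t  buf[DSM_PAGE_SIZE - 8 * sizeof(uint32_t)];
};

/* the whole exchange fits in exactly one page, so only 4K of address
 * space (or one vendor-specific PCI config window) has to be reserved */
static_assert(sizeof(struct dsm_page) == DSM_PAGE_SIZE,
              "dsm_page must be exactly one page");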

What's the driving use case for labels in the guest?  For example,
NVDIMM-N devices are supported by the kernel without labels.

I certainly would not want to sacrifice 1GB alignment for a label area.
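
For reference, the address-space arithmetic from Igor's note above works
out roughly as follows; this is a standalone back-of-the-envelope sketch
(nothing QEMU-specific, and the label payload itself is ignored since the
alignment dominates):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t ndimms = 256;          /* upper bound discussed above */
    const uint64_t GiB    = 1ULL << 30;
    const uint64_t MiB    = 1ULL << 20;

    /* label mapped right after each DIMM: every label inherits the
     * DIMM's 1GB alignment, so each one costs about 1GB of guest
     * physical address space */
    uint64_t per_dimm = ndimms * GiB;

    /* labels packed together with the default 2MB MemoryRegion
     * alignment: each label costs at most 2MB */
    uint64_t packed = ndimms * 2 * MiB;

    printf("per-DIMM labels: %llu GiB\n", (unsigned long long)(per_dimm / GiB));
    printf("packed labels:   %llu MiB\n", (unsigned long long)(packed / MiB));
    return 0;  /* prints 256 GiB vs. 512 MiB, i.e. well under 1GB */
}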
