* Questions about vNVDIMM on qemu/KVM
From: Yasunori Goto @ 2018-05-23  5:08 UTC
  To: qemu-devel; +Cc: NVDIMM-ML

Hello,

I'm investigating the status of vNVDIMM on qemu/KVM,
and I have some questions about it. I would be glad if anyone could answer them.

In my understanding, qemu/KVM has a feature to expose an NFIT to the guest,
and it is still being updated for platform capabilities with this patch set:
https://lists.gnu.org/archive/html/qemu-devel/2018-05/msg04756.html

And libvirt also supports this feature with <memory model='nvdimm'>
https://libvirt.org/formatdomain.html#elementsMemory
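
(For reference, a minimal domain XML snippet for this, based on the documentation
 above, looks roughly like the following; the path, size and NUMA node are just
 placeholders:)
    ---
      <memory model='nvdimm'>
        <source>
          <path>/tmp/nvdimm-backend</path>
        </source>
        <target>
          <size unit='KiB'>524288</size>
          <node>0</node>
        </target>
      </memory>
    ---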


However, virtio-pmem is under development now, and it is better
for architectures that detect NVDIMM regions without ACPI (like s390x).
In addition, it is also necessary to flush guest contents on a vNVDIMM
that has a backend file.


Q1) Does the ACPI.NFIT bus of qemu/kvm remain once virtio-pmem is available?
    What will the role of each be if both NFIT and virtio-pmem are available?
    If my understanding is correct, both NFIT and virtio-pmem are used to
    detect vNVDIMM regions, but only one seems to be necessary....

    Otherwise, is the NFIT bus just kept for compatibility,
    and is virtio-pmem the promising way forward?

    
Q2) What bus is (or will be) created for virtio-pmem?
    I could confirm that the NFIT bus is created with <memory model='nvdimm'>,
    and I heard another bus will be created for virtio-pmem, but I could not
    find concretely which bus is created.
    ---
      # ndctl list -B
      {
         "provider":"ACPI.NFIT",
         "dev":"ndbus0"
      }
    ---
   
    I think it affects what operations users will be able to perform, and what
    notifications are necessary for vNVDIMM.
    ACPI defines some operations like namespace control, and notifications
    for NVDIMM health status and other events.
    (I suppose that other status notifications might be necessary for vNVDIMM,
     but I'm not sure yet...)
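
    (For example, by "namespace control" I mean the kind of operations ndctl
     drives on the ACPI.NFIT bus today; the commands below are only an
     illustration, not something I have tried against a vNVDIMM:)
    ---
      # ndctl create-namespace --region=region0 --mode=fsdax
      # ndctl list -N
    ---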

If my understanding is wrong, please correct me.

Thanks,
---
Yasunori Goto


* Re: Questions about vNVDIMM on qemu/KVM
From: Dan Williams @ 2018-05-23 18:39 UTC
  To: Yasunori Goto; +Cc: Qemu Developers, NVDIMM-ML

On Tue, May 22, 2018 at 10:08 PM, Yasunori Goto <y-goto@jp.fujitsu.com> wrote:
> Hello,
>
> I'm investigating status of vNVDIMM on qemu/KVM,
> and I have some questions about it. I'm glad if anyone answer them.
>
> In my understanding, qemu/KVM has a feature to show NFIT for guest,
> and it will be still updated about platform capability with this patch set.
> https://lists.gnu.org/archive/html/qemu-devel/2018-05/msg04756.html
>
> And libvirt also supports this feature with <memory model='nvdimm'>
> https://libvirt.org/formatdomain.html#elementsMemory
>
>
> However, virtio-pmem is developing now, and it is better
> for architectures to detect regions of NVDIMM without ACPI (like s390x)

I think you are confusing virtio-pmem (patches from Pankaj) and
virtio-mem (patches from David)? ...or I'm confused.

> In addition, It is also necessary to flush guest contents on vNVDIMM
> who has a backend-file.

virtio-pmem is a mechanism to use host page cache as pmem in a guest.
It does not support high performance memory applications because it
requires fsync/msync. I.e., it is not DAX; it is the traditional mmap
I/O model, but with page cache management moved to the host rather than
duplicated in guests.
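
(As a generic illustration of that traditional model: this is not virtio-pmem
code, just the mmap + msync pattern an application would follow on such a
region. Error handling is omitted and the file path is a placeholder.)
    ---
      #include <fcntl.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
          /* assumes the file already exists and is at least 4 KiB */
          int fd = open("/mnt/pmem/file", O_RDWR);
          char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

          strcpy(p, "hello");       /* write lands in the (host-managed) page cache */
          msync(p, 4096, MS_SYNC);  /* the explicit sync is what makes it durable */

          munmap(p, 4096);
          close(fd);
          return 0;
      }
    ---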

> Q1) Does ACPI.NFIT bus of qemu/kvm remain with virtio-pmem?
>     How do each roles become it if both NFIT and virtio-pmem will be available?
>     If my understanding is correct, both NFIT and virtio-pmem is used to
>     detect vNVDIMM regions, but only one seems to be necessary....

We need both because they are different. Guest DAX should not be using
virtio-pmem.

>     Otherwise, is the NFIT bus just for keeping compatibility,
>     and virtio-pmem is promising way?
>
>
> Q2) What bus is(will be?) created for virtio-pmem?
>     I could confirm the bus of NFIT is created with <memory model='nvdimm'>,
>     and I heard other bus will be created for virtio-pmem, but I could not
>     find what bus is created concretely.
>     ---
>       # ndctl list -B
>       {
>          "provider":"ACPI.NFIT",
>          "dev":"ndbus0"
>       }
>     ---
>
>     I think it affects what operations user will be able to, and what
>     notification is necessary for vNVDIMM.
>     ACPI defines some operations like namespace control, and notification
>     for NVDIMM health status or others.
>     (I suppose that other status notification might be necessary for vNVDIMM,
>      but I'm not sure yet...)
>
> If my understanding is wrong, please correct me.

The current plan, per my understanding, is a virtio-pmem SPA UUID
added to the virtual NFIT so that the guest driver can load the pmem
driver but also hook up the virtio command ring for forwarding
WRITE_{FUA,FLUSH} commands as host fsync operations.
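
(Conceptually, the host side of that forwarding reduces to something like the
sketch below. This is purely illustrative, not actual code from either series;
the only real call is the fsync on the backing file:)
    ---
      #include <unistd.h>

      /* Hypothetical host-side handler: complete a guest WRITE_FLUSH/FUA
       * request by syncing the file that backs the virtio-pmem region. */
      static int handle_guest_flush(int backing_fd)
      {
          return fsync(backing_fd);   /* 0 on success, reported back to the guest */
      }
    ---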

* Re: Questions about vNVDIMM on qemu/KVM
From: Yasunori Goto @ 2018-05-24  7:19 UTC
  To: Dan Williams; +Cc: Qemu Developers, NVDIMM-ML

> On Tue, May 22, 2018 at 10:08 PM, Yasunori Goto <y-goto@jp.fujitsu.com> wrote:
> > Hello,
> >
> > I'm investigating status of vNVDIMM on qemu/KVM,
> > and I have some questions about it. I'm glad if anyone answer them.
> >
> > In my understanding, qemu/KVM has a feature to show NFIT for guest,
> > and it will be still updated about platform capability with this patch set.
> > https://lists.gnu.org/archive/html/qemu-devel/2018-05/msg04756.html
> >
> > And libvirt also supports this feature with <memory model='nvdimm'>
> > https://libvirt.org/formatdomain.html#elementsMemory
> >
> >
> > However, virtio-pmem is developing now, and it is better
> > for architectures to detect regions of NVDIMM without ACPI (like s390x)
> 
> I think you are confusing virtio-pmem (patches from Pankaj) and
> virtio-mem (patches from David)? ...or I'm confused.

Probably "I" am the one who is confused.
So your clarification is very helpful to me.


> 
> > In addition, It is also necessary to flush guest contents on vNVDIMM
> > who has a backend-file.
> 
> virtio-pmem is a mechanism to use host page cache as pmem in a guest.
> It does not support high performance memory applications because it
> requires fsync/msync. I.e. it is not DAX it is the traditional mmap
> I/O model, but moving page cache management to the host rather than
> duplicating it in guests.

Ah, ok.


> 
> > Q1) Does ACPI.NFIT bus of qemu/kvm remain with virtio-pmem?
> >     How do each roles become it if both NFIT and virtio-pmem will be available?
> >     If my understanding is correct, both NFIT and virtio-pmem is used to
> >     detect vNVDIMM regions, but only one seems to be necessary....
> 
> We need both because they are different. Guest DAX should not be using
> virtio-pmem.

Hmm. Ok.

But I would like to understand one more thing.
In the following mail, it seems that the e820 bus will be used for fake DAX.

https://lists.01.org/pipermail/linux-nvdimm/2018-January/013926.html

Could you tell me what the relationship is between "fake DAX" in this mail
and guest DAX?
Why is e820 necessary for this case?

(Probably this is one of the reasons why I'm confused....)


> 
> >     Otherwise, is the NFIT bus just for keeping compatibility,
> >     and virtio-pmem is promising way?
> >
> >
> > Q2) What bus is(will be?) created for virtio-pmem?
> >     I could confirm the bus of NFIT is created with <memory model='nvdimm'>,
> >     and I heard other bus will be created for virtio-pmem, but I could not
> >     find what bus is created concretely.
> >     ---
> >       # ndctl list -B
> >       {
> >          "provider":"ACPI.NFIT",
> >          "dev":"ndbus0"
> >       }
> >     ---
> >
> >     I think it affects what operations user will be able to, and what
> >     notification is necessary for vNVDIMM.
> >     ACPI defines some operations like namespace control, and notification
> >     for NVDIMM health status or others.
> >     (I suppose that other status notification might be necessary for vNVDIMM,
> >      but I'm not sure yet...)
> >
> > If my understanding is wrong, please correct me.
> 
> The current plan, per my understanding, is a virtio-pmem SPA UUID
> added to the virtual NFIT so that the guest driver can load the pmem
> driver but also hook up the virtio command ring for forwarding
> WRITE_{FUA,FLUSH} commands as host fsync operations.

Ok.

Thank you very much for your answer!

---
Yasunori Goto




* Re: Questions about vNVDIMM on qemu/KVM
From: Dan Williams @ 2018-05-24 14:08 UTC
  To: Yasunori Goto; +Cc: Qemu Developers, NVDIMM-ML

On Thu, May 24, 2018 at 12:19 AM, Yasunori Goto <y-goto@jp.fujitsu.com> wrote:
>> On Tue, May 22, 2018 at 10:08 PM, Yasunori Goto <y-goto@jp.fujitsu.com> wrote:
>> > Hello,
>> >
>> > I'm investigating status of vNVDIMM on qemu/KVM,
>> > and I have some questions about it. I'm glad if anyone answer them.
>> >
>> > In my understanding, qemu/KVM has a feature to show NFIT for guest,
>> > and it will be still updated about platform capability with this patch set.
>> > https://lists.gnu.org/archive/html/qemu-devel/2018-05/msg04756.html
>> >
>> > And libvirt also supports this feature with <memory model='nvdimm'>
>> > https://libvirt.org/formatdomain.html#elementsMemory
>> >
>> >
>> > However, virtio-pmem is developing now, and it is better
>> > for architectures to detect regions of NVDIMM without ACPI (like s390x)
>>
>> I think you are confusing virtio-pmem (patches from Pankaj) and
>> virtio-mem (patches from David)? ...or I'm confused.
>
> Probably, "I" am confusing.
> So, your clarification is very helpful for me.
>
>
>>
>> > In addition, It is also necessary to flush guest contents on vNVDIMM
>> > who has a backend-file.
>>
>> virtio-pmem is a mechanism to use host page cache as pmem in a guest.
>> It does not support high performance memory applications because it
>> requires fsync/msync. I.e. it is not DAX it is the traditional mmap
>> I/O model, but moving page cache management to the host rather than
>> duplicating it in guests.
>
> Ah, ok.
>
>
>>
>> > Q1) Does ACPI.NFIT bus of qemu/kvm remain with virtio-pmem?
>> >     How do each roles become it if both NFIT and virtio-pmem will be available?
>> >     If my understanding is correct, both NFIT and virtio-pmem is used to
>> >     detect vNVDIMM regions, but only one seems to be necessary....
>>
>> We need both because they are different. Guest DAX should not be using
>> virtio-pmem.
>
> Hmm. Ok.
>
> But ,I would like understand one more thing.
> In the following mail, it seems that e820 bus will be used for fake DAX.
>
> https://lists.01.org/pipermail/linux-nvdimm/2018-January/013926.html
>
> Could you tell me what is relationship between "fake DAX" in this mail
> and Guest DAX?
> Why e820 is necessary for this case?
>

It was proposed as a starting template for writing a new nvdimm bus
driver. All we need is a way to communicate both the address range and
the flush interface. This could be done with a new SPA Range GUID with
the NFIT, or a custom virtio-pci device that registers a special
nvdimm region with this property. My preference is whichever approach
minimizes the code duplication, because the pmem driver should be
re-used as much as possible.
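
(For context, as far as I understand, the "e820" route referred to there is the
same mechanism the kernel already uses for legacy persistent-memory regions
reserved on the kernel command line; the values below are placeholders:)
    ---
      # guest kernel parameter: mark 2G at offset 4G as e820 type-12 pmem
      memmap=2G!4G
      # such a region is then registered on the non-ACPI nvdimm bus, roughly:
      # ndctl list -B
      # { "provider":"e820", "dev":"ndbus0" }
    ---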

* Re: Questions about vNVDIMM on qemu/KVM
From: Yasunori Goto @ 2018-05-25  5:02 UTC
  To: Dan Williams; +Cc: Qemu Developers, NVDIMM-ML


> >
> > But ,I would like understand one more thing.
> > In the following mail, it seems that e820 bus will be used for fake DAX.
> >
> > https://lists.01.org/pipermail/linux-nvdimm/2018-January/013926.html
> >
> > Could you tell me what is relationship between "fake DAX" in this mail
> > and Guest DAX?
> > Why e820 is necessary for this case?
> >
> 
> It was proposed as a starting template for writing a new nvdimm bus
> driver. All we need is a way to communicate both the address range and
> the flush interface. This could be done with a new SPA Range GUID with
> the NFIT, or a custom virtio-pci device that registers a special
> nvdimm region with this property. My preference is whichever approach
> minimizes the code duplication, because the pmem driver should be
> re-used as much as possible.

Ok, I see.
Thank you very much for your explanation.

Bye,
---
Yasunori Goto



* Re: Questions about vNVDIMM on qemu/KVM
From: Stefan Hajnoczi @ 2018-06-01 11:54 UTC
  To: Yasunori Goto; +Cc: Pankaj Gupta, Luiz Capitulino, qemu-devel, NVDIMM-ML


On Wed, May 23, 2018 at 02:08:12PM +0900, Yasunori Goto wrote:
> Hello,

CCing Pankaj, who is developing virtio-pmem and may have comments beyond
what has already been discussed.

> I'm investigating status of vNVDIMM on qemu/KVM,
> and I have some questions about it. I'm glad if anyone answer them.
> 
> In my understanding, qemu/KVM has a feature to show NFIT for guest,
> and it will be still updated about platform capability with this patch set.
> https://lists.gnu.org/archive/html/qemu-devel/2018-05/msg04756.html
> 
> And libvirt also supports this feature with <memory model='nvdimm'>
> https://libvirt.org/formatdomain.html#elementsMemory
> 
> 
> However, virtio-pmem is developing now, and it is better
> for architectures to detect regions of NVDIMM without ACPI (like s390x)
> In addition, It is also necessary to flush guest contents on vNVDIMM
> who has a backend-file. 
> 
> 
> Q1) Does ACPI.NFIT bus of qemu/kvm remain with virtio-pmem? 
>     How do each roles become it if both NFIT and virtio-pmem will be available?
>     If my understanding is correct, both NFIT and virtio-pmem is used to
>     detect vNVDIMM regions, but only one seems to be necessary....
> 
>     Otherwise, is the NFIT bus just for keeping compatibility,
>     and virtio-pmem is promising way?
> 
>     
> Q2) What bus is(will be?) created for virtio-pmem?
>     I could confirm the bus of NFIT is created with <memory model='nvdimm'>,
>     and I heard other bus will be created for virtio-pmem, but I could not
>     find what bus is created concretely.
>     ---
>       # ndctl list -B
>       {
>          "provider":"ACPI.NFIT",
>          "dev":"ndbus0"
>       }
>     ---
>    
>     I think it affects what operations user will be able to, and what 
>     notification is necessary for vNVDIMM. 
>     ACPI defines some operations like namespace control, and notification
>     for NVDIMM health status or others.
>     (I suppose that other status notification might be necessary for vNVDIMM,
>      but I'm not sure yet...)
> 
> If my understanding is wrong, please correct me.
> 
> Thanks,
> ---
> Yasunori Goto
> 
> 
> 


* Re: [Qemu-devel] Questions about vNVDIMM on qemu/KVM
From: Pankaj Gupta @ 2018-06-04  8:03 UTC
  To: Yasunori Goto; +Cc: Stefan Hajnoczi, NVDIMM-ML, qemu-devel, Luiz Capitulino

Hi,
 
> 
> > I'm investigating status of vNVDIMM on qemu/KVM,
> > and I have some questions about it. I'm glad if anyone answer them.
> > 
> > In my understanding, qemu/KVM has a feature to show NFIT for guest,
> > and it will be still updated about platform capability with this patch set.
> > https://lists.gnu.org/archive/html/qemu-devel/2018-05/msg04756.html
> > 
> > And libvirt also supports this feature with <memory model='nvdimm'>
> > https://libvirt.org/formatdomain.html#elementsMemory
> > 
> > 
> > However, virtio-pmem is developing now, and it is better
> > for architectures to detect regions of NVDIMM without ACPI (like s390x)
> > In addition, It is also necessary to flush guest contents on vNVDIMM
> > who has a backend-file.
> > 
> > 
> > Q1) Does ACPI.NFIT bus of qemu/kvm remain with virtio-pmem?
        No.

> >     How do each roles become it if both NFIT and virtio-pmem will be
> >     available?

        There are two main use cases:

        1] DAX memory region passed through to the guest:
        -------------------------------------------------
        As this region is present on an actual physical NVDIMM device and exposed to the guest,
        the ACPI/NFIT way is used. If all the persistent memory is used only in this way, we
        don't need 'virtio-pmem'.

        2] Emulated DAX memory region in the host passed to the guest:
        ---------------------------------------------------------------
        If this type of region is exposed to the guest, it will be preferable to use
        'virtio-pmem'.

        This is regular host memory which is mmaped into the guest address space to emulate
        persistent memory. Guest writes sit in the host page cache and are not assured to be
        written to the backing disk without an explicit flush/sync call.

        'virtio-pmem' will solve the problem of flushing guest writes that sit in the host page cache.
        Host filesystems that use journalling (ext4, xfs) automatically call 'fsync' at regular
        intervals, but there is still no 100% assurance of write persistence until an explicit
        flush is done from the guest. So we need an additional fsync to flush guest writes to the
        backing disk. We are using this approach to avoid the guest page cache and to keep page
        cache management for all guests on the host side.

        If both ACPI NFIT and virtio-pmem are present, each will have its own memory region,
        described by the memory type "Persistent shared Memory" in the case of virtio-pmem and
        "Persistent Memory" for ACPI NVDIMM. This is to differentiate the two memory types.
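
        (As a rough command-line sketch of the two cases: 1] is the existing,
        documented ACPI NVDIMM configuration; the device name and options in
        2] are purely illustrative, since the virtio-pmem interface is still
        under development:)
        ---
          # 1] ACPI/NFIT vNVDIMM, e.g. backed by a host DAX device
          qemu-system-x86_64 -machine pc,nvdimm=on -m 4G,slots=2,maxmem=8G \
            -object memory-backend-file,id=mem1,share=on,mem-path=/dev/dax0.0,size=2G \
            -device nvdimm,id=nvdimm1,memdev=mem1

          # 2] hypothetical virtio-pmem device backed by a regular host file
          qemu-system-x86_64 ... \
            -object memory-backend-file,id=mem2,share=on,mem-path=/var/lib/pmem.img,size=2G \
            -device virtio-pmem-pci,memdev=mem2
        ---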

> >     If my understanding is correct, both NFIT and virtio-pmem is used to
> >     detect vNVDIMM regions, but only one seems to be necessary....
> > 
> >     Otherwise, is the NFIT bus just for keeping compatibility,
> >     and virtio-pmem is promising way?
> > 
> >     
> > Q2) What bus is(will be?) created for virtio-pmem?
> >     I could confirm the bus of NFIT is created with <memory
> >     model='nvdimm'>,

        For virtio-pmem it is also an nvdimm bus.
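
        (So I would expect it to show up in ndctl as an additional bus with its
        own provider string; the output below is purely hypothetical, since
        this is not settled yet:)
        ---
          # ndctl list -B
          [
            { "provider":"ACPI.NFIT", "dev":"ndbus0" },
            { "provider":"virtio-pmem", "dev":"ndbus1" }
          ]
        ---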

> >     and I heard other bus will be created for virtio-pmem, but I could not
> >     find what bus is created concretely.
> >     ---
> >       # ndctl list -B
> >       {
> >          "provider":"ACPI.NFIT",
> >          "dev":"ndbus0"
> >       }
> >     ---
> >    
> >     I think it affects what operations user will be able to, and what
> >     notification is necessary for vNVDIMM.
> >     ACPI defines some operations like namespace control, and notification
> >     for NVDIMM health status or others.
> >     (I suppose that other status notification might be necessary for
> >     vNVDIMM,
> >      but I'm not sure yet...)

         For virtio-pmem, we are not providing advanced features like namespaces and the
         various other features which ACPI NVDIMM hardware provides. This is just to keep
         the paravirt device simple.

         Moreover, I have not yet looked at the ndctl side of things. I am not 100% sure how
         ndctl will handle 'virtio-pmem'.

        Adding Dan to the loop; he can add his thoughts.

Thanks,
Pankaj

> > 
> > If my understanding is wrong, please correct me.
> > 
> > Thanks,
> > ---
> > Yasunori Goto
> > 
> > 
> > 
> 

* Re: [Qemu-devel] Questions about vNVDIMM on qemu/KVM
From: Yasunori Goto @ 2018-06-06  1:44 UTC
  To: Pankaj Gupta; +Cc: Stefan Hajnoczi, NVDIMM-ML, qemu-devel, Luiz Capitulino

> Hi,
>  
> > 
> > > I'm investigating status of vNVDIMM on qemu/KVM,
> > > and I have some questions about it. I'm glad if anyone answer them.
> > > 
> > > In my understanding, qemu/KVM has a feature to show NFIT for guest,
> > > and it will be still updated about platform capability with this patch set.
> > > https://lists.gnu.org/archive/html/qemu-devel/2018-05/msg04756.html
> > > 
> > > And libvirt also supports this feature with <memory model='nvdimm'>
> > > https://libvirt.org/formatdomain.html#elementsMemory
> > > 
> > > 
> > > However, virtio-pmem is developing now, and it is better
> > > for architectures to detect regions of NVDIMM without ACPI (like s390x)
> > > In addition, It is also necessary to flush guest contents on vNVDIMM
> > > who has a backend-file.
> > > 
> > > 
> > > Q1) Does ACPI.NFIT bus of qemu/kvm remain with virtio-pmem?
>         No.
> 
> > >     How do each roles become it if both NFIT and virtio-pmem will be
> > >     available?
> 
>         There are two main use cases:
> 
>         1] DAX memory region pass-through to guest:
>         -------------------------------------------
>         As this region is present in actual physical NVDIMM device and exposed to guest,
>         ACPI/NFIT way is used. If all the persistent memory is used by only this way we 
>         don't need 'virtio-pmem'.
> 
>         2] Emulated DAX memory region in host passed to guest:
>         --------------------------------------------------------
>         If this type of region is exposed to guest, it will be preferable to use
>         'virtio-pmem'. 
> 
>         This is regular host memory which is mmaped in guest address space for emulating 
>         persistent memory. Guest writes are present in host page cache and not assured to be 
>         written on backing disk without an explicit flush/sync call.
>  
>         'virtio-pmem' will solve the problem of flushing guest writes present in host page cache.
>         With filesystems at host which use journal-ling like (ext4, xfs), they automatically call 
>         'fsync' at regular intervals. but still there is not 100% assurance of all write persistence until
>         an explicit flush is done from guest. So, we need an additional fsync to flush guest writes to 
>         backing disk. We are using this approach to avoid using guest page cache and keep page cache management 
>         of all the guests at host side.
>         
>         If both ACPI NFIT and virtio-pmem are present, both will have their corresponding memory regions and 
>         defined by memory type "Persistent shared Memory" in case of virtio-pmem and "Persistent Memory" for 
>         ACPI NVDIMM. This is to differentiate both the memory types.

Ok.

> 
> > >     If my understanding is correct, both NFIT and virtio-pmem is used to
> > >     detect vNVDIMM regions, but only one seems to be necessary....
> > > 
> > >     Otherwise, is the NFIT bus just for keeping compatibility,
> > >     and virtio-pmem is promising way?
> > > 
> > >     
> > > Q2) What bus is(will be?) created for virtio-pmem?
> > >     I could confirm the bus of NFIT is created with <memory
> > >     model='nvdimm'>,
> 
>         For virtio-pmem also its nvdimm bus.
> 
> > >     and I heard other bus will be created for virtio-pmem, but I could not
> > >     find what bus is created concretely.
> > >     ---
> > >       # ndctl list -B
> > >       {
> > >          "provider":"ACPI.NFIT",
> > >          "dev":"ndbus0"
> > >       }
> > >     ---
> > >    
> > >     I think it affects what operations user will be able to, and what
> > >     notification is necessary for vNVDIMM.
> > >     ACPI defines some operations like namespace control, and notification
> > >     for NVDIMM health status or others.
> > >     (I suppose that other status notification might be necessary for
> > >     vNVDIMM,
> > >      but I'm not sure yet...)
> 
>          For virtio-pmem, we are not providing advance features like namespace and various
>          other features which ACPI/NVDIMM hardware provides. This is just to keep paravirt
>          device simple.

Hmm, I see.
Thank you for your explanation.

Bye.
---
Yasunori Goto


> 
>          Moreover I have not yet looked at ndctl side of things. I am not 100% sure how
>          ndctl will handle 'virtio-pmem'.
> 
>         Adding 'Dan' in loop, he can add his thoughts.
> 
> Thanks,
> Pankaj
> 

