qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [RFC] nvme: how to support multiple namespaces
@ 2019-06-17  8:12 Klaus Birkelund
  2019-06-20 15:37 ` Laszlo Ersek
  0 siblings, 1 reply; 13+ messages in thread
From: Klaus Birkelund @ 2019-06-17  8:12 UTC (permalink / raw)
  To: qemu-devel; +Cc: Keith Busch, Kevin Wolf, Laszlo Ersek, qemu-block, Max Reitz

Hi all,

I'm thinking about how to support multiple namespaces in the NVMe
device. My first idea was to add a "namespaces" property array to the
device that references blockdevs, but as Laszlo writes below, this might
not be the best idea. It also makes it troublesome to add per-namespace
parameters (which is something I will be required to do for other
reasons). Some of you might remember my first attempt at this that
included adding a new block driver (derived from raw) that could be
given certain parameters that would then be stored in the image. But I
understand that this is a no-go, and I can see why.

I guess the optimal way would be for the parameters to look something
like:

   -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
   -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
   -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
   -device nvme-ns,drive=blk_ns2,...
   -device nvme,...

My question is how to state the parent/child relationship between the
nvme and nvme-ns devices. I've been looking at how ide and virtio do
this, and maybe a "bus" is the right way to go?

Can anyone give any advice as to how to proceed? I have a functioning
patch that adds multiple namespaces, but it uses the "namespaces" array
method and I don't think that is the right approach.

I've copied my initial discussion with Laszlo below.


Cheers,
Klaus


On Wed, Jun 05, 2019 at 07:09:43PM +0200, Laszlo Ersek wrote:
> On 06/05/19 15:44, Klaus Birkelund wrote:
> > On Tue, Jun 04, 2019 at 06:52:38PM +0200, Laszlo Ersek wrote:
> >> Hi Klaus,
> >>
> >> On 06/04/19 14:59, Klaus Birkelund wrote:
> >>> Hi Laszlo,
> >>>
> >>> I'm implementing multiple namespace support for the NVMe device in QEMU
> >>> and I'm not sure how to handle the bootindex property.
> >>>
> >>> Your commit message from a907ec52cc1a provides great insight, but do you
> >>> have any recommendations on how the bootindex property should be
> >>> handled?
> >>>
> >>> Multiple namespaces work by having multiple -blockdevs and then using
> >>> the property array functionality to reference a list of blockdevs from
> >>> the nvme device:
> >>>
> >>>     -device nvme,serial=<serial>,len-namespaces=1,namespace[0]=<drive_id>
> >>>
> >>> A bootindex property would be global to the device. Should it just
> >>> always default to the first namespace? I'm really unsure about how the
> >>> firmware handles it.
> >>>
> >>> Hope you can shed some light on this.
> >>
> >> this is getting quite seriously into QOM and QEMU options, so I
> >> definitely suggest to take this to the list, because I'm not an expert
> >> in all that, at all :)
> >>
> >> Based on a re-reading of the commit (which I have *completely* forgotten
> >> about by now!), and based on your description, my opinion is that
> >> introducing the "namespace" property to the "nvme" device as an array is
> >> a bad fit. Because, as you say, a single device may only take a single
> >> bootindex property. If it suffices to designate at most one namespace
> >> for booting purposes, then I *guess* an extra property can be
> >> introduced, to state *which* namespace the bootindex property should
> >> apply to (and the rest of the namespaces will be ignored for that
> >> purpose). However, if it's necessary to add at least two namespaces to
> >> the boot order, then the namespaces will have to be split to distinct
> >> "-device" options.
> >>
> >> My impression is that the "namespace" property isn't upstream yet; i.e.
> >> it is your work in progress. As a "QOM noob" I would suggest introducing
> >> a new device model, called "nvme-namespace". This could have its own
> >> "bootindex" property. On the "nvme" device model's level, the currently
> >> existing "bootindex" property would become mutually exclusive with the
> >> "nvme" device having "nvme-namespace" child devices. The parent-child
> >> relationship could be expressed from either direction, i.e. either the
> >> "nvme" parent device could reference the children with the "namespace"
> >> array property (it wouldn't refer to <drive_id>s but to the IDs of
> >> "nvme-namespace" devices), or the "nvme-namespace" devices could
> >> reference the parent "nvme" device via a "bus" property or similar.
> >>
> >> The idea is that "bootindex" would have to exist at the nvme-namespace
> >> device model's level, and a parent ("bus") device would have to enforce
> >> various properties, such as no namespace ID duplication and so on.
> >>
> >> I suggest that, if/when you respond to this email, keep all context, and
> >> CC the qemu-devel list at once. (I could have done that myself right
> >> now, but didn't want to, without your permission.)
> >>
> > 
> > Hi Laszlo,
> > 
> > Thank you very much for the feedback!
> > 
> > I have a big patch series for the nvme device which the multiple
> > namespace patch builds on. I'll post the big series tomorrow I hope.
> > Then I'll post the multiple namespaces patch as an RFC and include our
> > discussion here.
> > 
> > I hadn't thought about introducing a separate device model for the
> > namespace. It's way beyond my QOM knowledge, so yeah, hopefully someone
> > on the list has some opinions on this.
> > 
> > 
> > Thanks again!
> 
> My pleasure! I'll attempt to follow the discussion (from a safe distance
> :) ) because I'm curious about the proper device model hierarchy here.
> 
> Regarding OVMF, as long as your QEMU work keeps the *structure* of the
> OpenFirmware device paths intact (and you just compose the NSID and
> EUI-64 values dynamically, in the trailing "unit address" part), OVMF
> should need no change.
> 
> Thanks!
> Laszlo


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [RFC] nvme: how to support multiple namespaces
  2019-06-17  8:12 [Qemu-devel] [RFC] nvme: how to support multiple namespaces Klaus Birkelund
@ 2019-06-20 15:37 ` Laszlo Ersek
  2019-06-24  8:01   ` [Qemu-devel] [Qemu-block] " Klaus Birkelund
  0 siblings, 1 reply; 13+ messages in thread
From: Laszlo Ersek @ 2019-06-20 15:37 UTC (permalink / raw)
  To: qemu-devel, Keith Busch, Kevin Wolf, Max Reitz, qemu-block,
	Markus Armbruster

On 06/17/19 10:12, Klaus Birkelund wrote:
> Hi all,
> 
> I'm thinking about how to support multiple namespaces in the NVMe
> device. My first idea was to add a "namespaces" property array to the
> device that references blockdevs, but as Laszlo writes below, this might
> not be the best idea. It also makes it troublesome to add per-namespace
> parameters (which is something I will be required to do for other
> reasons). Some of you might remember my first attempt at this that
> included adding a new block driver (derived from raw) that could be
> given certain parameters that would then be stored in the image. But I
> understand that this is a no-go, and I can see why.
> 
> I guess the optimal way would be for the parameters to look something
> like:
> 
>    -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
>    -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
>    -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
>    -device nvme-ns,drive=blk_ns2,...
>    -device nvme,...
> 
> My question is how to state the parent/child relationship between the
> nvme and nvme-ns devices. I've been looking at how ide and virtio do
> this, and maybe a "bus" is the right way to go?

I've added Markus to the address list, because of this question. No
other (new) comments from me on the thread starter at this time, just
keeping the full context.

Thanks
Laszlo

> 
> Can anyone give any advice as to how to proceed? I have a functioning
> patch that adds multiple namespaces, but it uses the "namespaces" array
> method and I don't think that is the right approach.
> 
> I've copied my initial discussion with Laszlo below.
> 
> 
> Cheers,
> Klaus
> 
> 
> On Wed, Jun 05, 2019 at 07:09:43PM +0200, Laszlo Ersek wrote:
>> On 06/05/19 15:44, Klaus Birkelund wrote:
>>> On Tue, Jun 04, 2019 at 06:52:38PM +0200, Laszlo Ersek wrote:
>>>> Hi Klaus,
>>>>
>>>> On 06/04/19 14:59, Klaus Birkelund wrote:
>>>>> Hi Laszlo,
>>>>>
>>>>> I'm implementing multiple namespace support for the NVMe device in QEMU
>>>>> and I'm not sure how to handle the bootindex property.
>>>>>
>>>>> Your commit message from a907ec52cc1a provides great insight, but do you
>>>>> have any recommendations on how the bootindex property should be
>>>>> handled?
>>>>>
>>>>> Multiple namespaces work by having multiple -blockdevs and then using
>>>>> the property array functionality to reference a list of blockdevs from
>>>>> the nvme device:
>>>>>
>>>>>     -device nvme,serial=<serial>,len-namespaces=1,namespace[0]=<drive_id>
>>>>>
>>>>> A bootindex property would be global to the device. Should it just
>>>>> always default to the first namespace? I'm really unsure about how the
>>>>> firmware handles it.
>>>>>
>>>>> Hope you can shed some light on this.
>>>>
>>>> this is getting quite seriously into QOM and QEMU options, so I
>>>> definitely suggest to take this to the list, because I'm not an expert
>>>> in all that, at all :)
>>>>
>>>> Based on a re-reading of the commit (which I have *completely* forgotten
>>>> about by now!), and based on your description, my opinion is that
>>>> introducing the "namespace" property to the "nvme" device as an array is
>>>> a bad fit. Because, as you say, a single device may only take a single
>>>> bootindex property. If it suffices to designate at most one namespace
>>>> for booting purposes, then I *guess* an extra property can be
>>>> introduced, to state *which* namespace the bootindex property should
>>>> apply to (and the rest of the namespaces will be ignored for that
>>>> purpose). However, if it's necessary to add at least two namespaces to
>>>> the boot order, then the namespaces will have to be split to distinct
>>>> "-device" options.
>>>>
>>>> My impression is that the "namespace" property isn't upstream yet; i.e.
>>>> it is your work in progress. As a "QOM noob" I would suggest introducing
>>>> a new device model, called "nvme-namespace". This could have its own
>>>> "bootindex" property. On the "nvme" device model's level, the currently
>>>> existing "bootindex" property would become mutually exclusive with the
>>>> "nvme" device having "nvme-namespace" child devices. The parent-child
>>>> relationship could be expressed from either direction, i.e. either the
>>>> "nvme" parent device could reference the children with the "namespace"
>>>> array property (it wouldn't refer to <drive_id>s but to the IDs of
>>>> "nvme-namespace" devices), or the "nvme-namespace" devices could
>>>> reference the parent "nvme" device via a "bus" property or similar.
>>>>
>>>> The idea is that "bootindex" would have to exist at the nvme-namespace
>>>> device model's level, and a parent ("bus") device would have to enforce
>>>> various properties, such as no namespace ID duplication and so on.
>>>>
>>>> I suggest that, if/when you respond to this email, keep all context, and
>>>> CC the qemu-devel list at once. (I could have done that myself right
>>>> now, but didn't want to, without your permission.)
>>>>
>>>
>>> Hi Laszlo,
>>>
>>> Thank you very much for the feedback!
>>>
>>> I have a big patch series for the nvme device which the multiple
>>> namespace patch builds on. I'll post the big series tomorrow I hope.
>>> Then I'll post the multiple namespaces patch as an RFC and include our
>>> discussion here.
>>>
>>> I hadn't thought about introducing a separate device model for the
>>> namespace. It's way beyond my QOM knowledge, so yeah, hopefully someone
>>> on the list has some opinions on this.
>>>
>>>
>>> Thanks again!
>>
>> My pleasure! I'll attempt to follow the discussion (from a safe distance
>> :) ) because I'm curious about the proper device model hierarchy here.
>>
>> Regarding OVMF, as long as your QEMU work keeps the *structure* of the
>> OpenFirmware device paths intact (and you just compose the NSID and
>> EUI-64 values dynamically, in the trailing "unit address" part), OVMF
>> should need no change.
>>
>> Thanks!
>> Laszlo
> 




* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-20 15:37 ` Laszlo Ersek
@ 2019-06-24  8:01   ` Klaus Birkelund
  2019-06-24 10:18     ` Kevin Wolf
  0 siblings, 1 reply; 13+ messages in thread
From: Klaus Birkelund @ 2019-06-24  8:01 UTC (permalink / raw)
  To: Laszlo Ersek
  Cc: Kevin Wolf, qemu-block, Markus Armbruster, qemu-devel,
	Keith Busch, Max Reitz

On Thu, Jun 20, 2019 at 05:37:24PM +0200, Laszlo Ersek wrote:
> On 06/17/19 10:12, Klaus Birkelund wrote:
> > Hi all,
> > 
> > I'm thinking about how to support multiple namespaces in the NVMe
> > device. My first idea was to add a "namespaces" property array to the
> > device that references blockdevs, but as Laszlo writes below, this might
> > not be the best idea. It also makes it troublesome to add per-namespace
> > parameters (which is something I will be required to do for other
> > reasons). Some of you might remember my first attempt at this that
> > included adding a new block driver (derived from raw) that could be
> > given certain parameters that would then be stored in the image. But I
> > understand that this is a no-go, and I can see why.
> > 
> > I guess the optimal way would be for the parameters to look something
> > like:
> > 
> >    -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
> >    -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
> >    -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
> >    -device nvme-ns,drive=blk_ns2,...
> >    -device nvme,...
> > 
> > My question is how to state the parent/child relationship between the
> > nvme and nvme-ns devices. I've been looking at how ide and virtio do
> > this, and maybe a "bus" is the right way to go?
> 
> I've added Markus to the address list, because of this question. No
> other (new) comments from me on the thread starter at this time, just
> keeping the full context.
> 

Hi all,

I've successfully implemented this by introducing a new 'nvme-ns' device
model. The nvme device creates a bus named after the device id ('id'
parameter) and the nvme-ns devices are then registered on this bus.

This results in an nvme device being created like this (two-namespace
example):

  -drive file=nvme0n1.img,if=none,id=disk1
  -drive file=nvme0n2.img,if=none,id=disk2
  -device nvme,serial=deadbeef,id=nvme0
  -device nvme-ns,drive=disk1,bus=nvme0,nsid=1
  -device nvme-ns,drive=disk2,bus=nvme0,nsid=2
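
In code, the bus creation boils down to something like this (just a
rough sketch against the qdev API; the type and field names here are
my own invention and may differ from the actual patch):

  #define TYPE_NVME_BUS "nvme-bus"

  typedef struct NvmeBus {
      BusState parent_bus;
  } NvmeBus;

  static const TypeInfo nvme_bus_info = {
      .name          = TYPE_NVME_BUS,
      .parent        = TYPE_BUS,
      .instance_size = sizeof(NvmeBus),
  };

  /* in the nvme device's realize function; naming the bus after the
   * device id is what lets an nvme-ns device say bus=nvme0 */
  qbus_create_inplace(&n->bus, sizeof(n->bus), TYPE_NVME_BUS,
                      DEVICE(n), dev->id);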

How does that look as a way forward?

Cheers,
Klaus



* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-24  8:01   ` [Qemu-devel] [Qemu-block] " Klaus Birkelund
@ 2019-06-24 10:18     ` Kevin Wolf
  2019-06-24 20:46       ` Laszlo Ersek
  2019-06-25 16:45       ` Klaus Birkelund
  0 siblings, 2 replies; 13+ messages in thread
From: Kevin Wolf @ 2019-06-24 10:18 UTC (permalink / raw)
  To: Laszlo Ersek, qemu-devel, Keith Busch, Max Reitz, qemu-block,
	Markus Armbruster

Am 24.06.2019 um 10:01 hat Klaus Birkelund geschrieben:
> On Thu, Jun 20, 2019 at 05:37:24PM +0200, Laszlo Ersek wrote:
> > On 06/17/19 10:12, Klaus Birkelund wrote:
> > > Hi all,
> > > 
> > > I'm thinking about how to support multiple namespaces in the NVMe
> > > device. My first idea was to add a "namespaces" property array to the
> > > device that references blockdevs, but as Laszlo writes below, this might
> > > not be the best idea. It also makes it troublesome to add per-namespace
> > > parameters (which is something I will be required to do for other
> > > reasons). Some of you might remember my first attempt at this that
> > > included adding a new block driver (derived from raw) that could be
> > > given certain parameters that would then be stored in the image. But I
> > > understand that this is a no-go, and I can see why.
> > > 
> > > I guess the optimal way would be for the parameters to look something
> > > like:
> > > 
> > >    -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
> > >    -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
> > >    -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
> > >    -device nvme-ns,drive=blk_ns2,...
> > >    -device nvme,...
> > > 
> > > My question is how to state the parent/child relationship between the
> > > nvme and nvme-ns devices. I've been looking at how ide and virtio do
> > > this, and maybe a "bus" is the right way to go?
> > 
> > I've added Markus to the address list, because of this question. No
> > other (new) comments from me on the thread starter at this time, just
> > keeping the full context.
> > 
> 
> Hi all,
> 
> I've successfully implemented this by introducing a new 'nvme-ns' device
> model. The nvme device creates a bus named after the device id ('id'
> parameter) and the nvme-ns devices are then registered on this bus.
> 
> This results in an nvme device being created like this (two-namespace
> example):
> 
>   -drive file=nvme0n1.img,if=none,id=disk1
>   -drive file=nvme0n2.img,if=none,id=disk2
>   -device nvme,serial=deadbeef,id=nvme0
>   -device nvme-ns,drive=disk1,bus=nvme0,nsid=1
>   -device nvme-ns,drive=disk2,bus=nvme0,nsid=2
> 
> How does that look as a way forward?

This looks very similar to what other devices do (one bus controller
that has multiple devices on its bus), so I like it.

The thing that is special here is that -device nvme is already a block
device by itself that can take a drive property. So how does this play
together? Can I choose to either specify a drive directly for the nvme
device or nvme-ns devices, but when I do both, I will get an error? What
happens if I don't specify a drive for nvme, but also don't add nvme-ns
devices?

Kevin



* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-24 10:18     ` Kevin Wolf
@ 2019-06-24 20:46       ` Laszlo Ersek
  2019-06-25  5:51         ` Markus Armbruster
  2019-06-25  7:24         ` Kevin Wolf
  2019-06-25 16:45       ` Klaus Birkelund
  1 sibling, 2 replies; 13+ messages in thread
From: Laszlo Ersek @ 2019-06-24 20:46 UTC (permalink / raw)
  To: Kevin Wolf, qemu-devel, Keith Busch, Max Reitz, qemu-block,
	Markus Armbruster

On 06/24/19 12:18, Kevin Wolf wrote:
> Am 24.06.2019 um 10:01 hat Klaus Birkelund geschrieben:
>> On Thu, Jun 20, 2019 at 05:37:24PM +0200, Laszlo Ersek wrote:
>>> On 06/17/19 10:12, Klaus Birkelund wrote:
>>>> Hi all,
>>>>
>>>> I'm thinking about how to support multiple namespaces in the NVMe
>>>> device. My first idea was to add a "namespaces" property array to the
>>>> device that references blockdevs, but as Laszlo writes below, this might
>>>> not be the best idea. It also makes it troublesome to add per-namespace
>>>> parameters (which is something I will be required to do for other
>>>> reasons). Some of you might remember my first attempt at this that
>>>> included adding a new block driver (derived from raw) that could be
>>>> given certain parameters that would then be stored in the image. But I
>>>> understand that this is a no-go, and I can see why.
>>>>
>>>> I guess the optimal way would be for the parameters to look something
>>>> like:
>>>>
>>>>    -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
>>>>    -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
>>>>    -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
>>>>    -device nvme-ns,drive=blk_ns2,...
>>>>    -device nvme,...
>>>>
>>>> My question is how to state the parent/child relationship between the
>>>> nvme and nvme-ns devices. I've been looking at how ide and virtio do
>>>> this, and maybe a "bus" is the right way to go?
>>>
>>> I've added Markus to the address list, because of this question. No
>>> other (new) comments from me on the thread starter at this time, just
>>> keeping the full context.
>>>
>>
>> Hi all,
>>
>> I've successfully implemented this by introducing a new 'nvme-ns' device
>> model. The nvme device creates a bus named after the device id ('id'
>> parameter) and the nvme-ns devices are then registered on this bus.
>>
>> This results in an nvme device being created like this (two-namespace
>> example):
>>
>>   -drive file=nvme0n1.img,if=none,id=disk1
>>   -drive file=nvme0n2.img,if=none,id=disk2
>>   -device nvme,serial=deadbeef,id=nvme0
>>   -device nvme-ns,drive=disk1,bus=nvme0,nsid=1
>>   -device nvme-ns,drive=disk2,bus=nvme0,nsid=2
>>
>> How does that look as a way forward?
> 
> This looks very similar to what other devices do (one bus controller
> that has multiple devices on its bus), so I like it.

+1

Also, I believe it's more modern nowadays to express the same example
with "blockdev" syntax, rather than "drive". (Not that I could suggest
the exact spelling for that :)) I don't expect the modern syntax to
behave differently, I just guess it's better to stick with the new
syntax in examples / commit messages etc.

> The thing that is special here is that -device nvme is already a block
> device by itself that can take a drive property. So how does this play
> together? Can I choose to either specify a drive directly for the nvme
> device or nvme-ns devices, but when I do both, I will get an error? What
> happens if I don't specify a drive for nvme, but also don't add nvme-ns
> devices?

Great questions!

Thanks!
Laszlo



* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-24 20:46       ` Laszlo Ersek
@ 2019-06-25  5:51         ` Markus Armbruster
  2019-06-25 16:47           ` Klaus Birkelund
  2019-06-25  7:24         ` Kevin Wolf
  1 sibling, 1 reply; 13+ messages in thread
From: Markus Armbruster @ 2019-06-25  5:51 UTC (permalink / raw)
  To: Laszlo Ersek; +Cc: Kevin Wolf, Keith Busch, qemu-devel, qemu-block, Max Reitz

Laszlo Ersek <lersek@redhat.com> writes:

> On 06/24/19 12:18, Kevin Wolf wrote:
>> Am 24.06.2019 um 10:01 hat Klaus Birkelund geschrieben:
>>> On Thu, Jun 20, 2019 at 05:37:24PM +0200, Laszlo Ersek wrote:
>>>> On 06/17/19 10:12, Klaus Birkelund wrote:
>>>>> Hi all,
>>>>>
>>>>> I'm thinking about how to support multiple namespaces in the NVMe
>>>>> device. My first idea was to add a "namespaces" property array to the
>>>>> device that references blockdevs, but as Laszlo writes below, this might
>>>>> not be the best idea. It also makes it troublesome to add per-namespace
>>>>> parameters (which is something I will be required to do for other
>>>>> reasons). Some of you might remember my first attempt at this that
>>>>> included adding a new block driver (derived from raw) that could be
>>>>> given certain parameters that would then be stored in the image. But I
>>>>> understand that this is a no-go, and I can see why.
>>>>>
>>>>> I guess the optimal way would be for the parameters to look something
>>>>> like:
>>>>>
>>>>>    -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
>>>>>    -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
>>>>>    -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
>>>>>    -device nvme-ns,drive=blk_ns2,...
>>>>>    -device nvme,...
>>>>>
>>>>> My question is how to state the parent/child relationship between the
>>>>> nvme and nvme-ns devices. I've been looking at how ide and virtio do
>>>>> this, and maybe a "bus" is the right way to go?
>>>>
>>>> I've added Markus to the address list, because of this question. No
>>>> other (new) comments from me on the thread starter at this time, just
>>>> keeping the full context.
>>>>
>>>
>>> Hi all,
>>>
>>> I've successfully implemented this by introducing a new 'nvme-ns' device
>>> model. The nvme device creates a bus named after the device id ('id'
>>> parameter) and the nvme-ns devices are then registered on this bus.
>>>
>>> This results in an nvme device being created like this (two-namespace
>>> example):
>>>
>>>   -drive file=nvme0n1.img,if=none,id=disk1
>>>   -drive file=nvme0n2.img,if=none,id=disk2
>>>   -device nvme,serial=deadbeef,id=nvme0
>>>   -device nvme-ns,drive=disk1,bus=nvme0,nsid=1
>>>   -device nvme-ns,drive=disk2,bus=nvme0,nsid=2
>>>
>>> How does that look as a way forward?
>> 
>> This looks very similar to what other devices do (one bus controller
>> that has multiple devices on its bus), so I like it.

Devices can be wired together without a bus intermediary.  You
definitely want a bus when the physical connection you model has one.
If not, a bus may be useful anyway, say because it provides a convenient
way to encapsulate the connection model, or to support -device bus=...

> +1
>
> Also, I believe it's more modern nowadays to express the same example
> with "blockdev" syntax, rather than "drive". (Not that I could suggest
> the exact spelling for that :)) I don't expect the modern syntax to
> behave differently, I just guess it's better to stick with the new
> syntax in examples / commit messages etc.

Management applications should move to -blockdev.  -drive has too much
bad magic sticking to it.

We're not urging humans to switch, at least not yet.  We may want to
provide convenience features on top of plain -blockdev before we do.

As far as I know, we don't yet eschew -drive in documentation or commit
messages.  Perhaps we should consider such a policy for documentation.

[...]



* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-24 20:46       ` Laszlo Ersek
  2019-06-25  5:51         ` Markus Armbruster
@ 2019-06-25  7:24         ` Kevin Wolf
  1 sibling, 0 replies; 13+ messages in thread
From: Kevin Wolf @ 2019-06-25  7:24 UTC (permalink / raw)
  To: Laszlo Ersek
  Cc: Keith Busch, Markus Armbruster, qemu-devel, qemu-block, Max Reitz

Am 24.06.2019 um 22:46 hat Laszlo Ersek geschrieben:
> On 06/24/19 12:18, Kevin Wolf wrote:
> > Am 24.06.2019 um 10:01 hat Klaus Birkelund geschrieben:
> >> On Thu, Jun 20, 2019 at 05:37:24PM +0200, Laszlo Ersek wrote:
> >>> On 06/17/19 10:12, Klaus Birkelund wrote:
> >>>> Hi all,
> >>>>
> >>>> I'm thinking about how to support multiple namespaces in the NVMe
> >>>> device. My first idea was to add a "namespaces" property array to the
> >>>> device that references blockdevs, but as Laszlo writes below, this might
> >>>> not be the best idea. It also makes it troublesome to add per-namespace
> >>>> parameters (which is something I will be required to do for other
> >>>> reasons). Some of you might remember my first attempt at this that
> >>>> included adding a new block driver (derived from raw) that could be
> >>>> given certain parameters that would then be stored in the image. But I
> >>>> understand that this is a no-go, and I can see why.
> >>>>
> >>>> I guess the optimal way would be for the parameters to look something
> >>>> like:
> >>>>
> >>>>    -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
> >>>>    -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
> >>>>    -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
> >>>>    -device nvme-ns,drive=blk_ns2,...
> >>>>    -device nvme,...
> >>>>
> >>>> My question is how to state the parent/child relationship between the
> >>>> nvme and nvme-ns devices. I've been looking at how ide and virtio do
> >>>> this, and maybe a "bus" is the right way to go?
> >>>
> >>> I've added Markus to the address list, because of this question. No
> >>> other (new) comments from me on the thread starter at this time, just
> >>> keeping the full context.
> >>>
> >>
> >> Hi all,
> >>
> >> I've successfully implemented this by introducing a new 'nvme-ns' device
> >> model. The nvme device creates a bus named after the device id ('id'
> >> parameter) and the nvme-ns devices are then registered on this bus.
> >>
> >> This results in an nvme device being created like this (two-namespace
> >> example):
> >>
> >>   -drive file=nvme0n1.img,if=none,id=disk1
> >>   -drive file=nvme0n2.img,if=none,id=disk2
> >>   -device nvme,serial=deadbeef,id=nvme0
> >>   -device nvme-ns,drive=disk1,bus=nvme0,nsid=1
> >>   -device nvme-ns,drive=disk2,bus=nvme0,nsid=2
> >>
> >> How does that look as a way forward?
> > 
> > This looks very similar to what other devices do (one bus controller
> > that has multiple devices on its bus), so I like it.
> 
> +1
> 
> Also, I believe it's more modern nowadays to express the same example
> with "blockdev" syntax, rather than "drive". (Not that I could suggest
> the exact spelling for that :)) I don't expect the modern syntax to
> behave differently, I just guess it's better to stick with the new
> syntax in examples / commit messages etc.

As this example uses only raw files, it's actually pretty simple:

-blockdev driver=file,filename=nvme0n1.img,node-name=disk1
-blockdev driver=file,filename=nvme0n2.img,node-name=disk2

The -device options stay the same, their drive=... value just refers to
the node-name now. (-drive IDs and node-names have a shared namespace,
so this is unambiguous.)
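
Putting both pieces together, the complete two-namespace example would
then presumably become:

-blockdev driver=file,filename=nvme0n1.img,node-name=disk1
-blockdev driver=file,filename=nvme0n2.img,node-name=disk2
-device nvme,serial=deadbeef,id=nvme0
-device nvme-ns,drive=disk1,bus=nvme0,nsid=1
-device nvme-ns,drive=disk2,bus=nvme0,nsid=2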

For the sake of completeness, if nvme0n1.img were actually a qcow2
image, you would add a second -blockdev for the format layer:

-blockdev driver=file,filename=nvme0n1.img,node-name=disk1-file
-blockdev driver=qcow2,file=disk1-file,node-name=disk1

Kevin



* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-24 10:18     ` Kevin Wolf
  2019-06-24 20:46       ` Laszlo Ersek
@ 2019-06-25 16:45       ` Klaus Birkelund
  2019-06-26  4:54         ` Markus Armbruster
  1 sibling, 1 reply; 13+ messages in thread
From: Klaus Birkelund @ 2019-06-25 16:45 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: qemu-block, Markus Armbruster, qemu-devel, Keith Busch,
	Max Reitz, Laszlo Ersek

On Mon, Jun 24, 2019 at 12:18:45PM +0200, Kevin Wolf wrote:
> Am 24.06.2019 um 10:01 hat Klaus Birkelund geschrieben:
> > On Thu, Jun 20, 2019 at 05:37:24PM +0200, Laszlo Ersek wrote:
> > > On 06/17/19 10:12, Klaus Birkelund wrote:
> > > > Hi all,
> > > > 
> > > > I'm thinking about how to support multiple namespaces in the NVMe
> > > > device. My first idea was to add a "namespaces" property array to the
> > > > device that references blockdevs, but as Laszlo writes below, this might
> > > > not be the best idea. It also makes it troublesome to add per-namespace
> > > > parameters (which is something I will be required to do for other
> > > > reasons). Some of you might remember my first attempt at this that
> > > > included adding a new block driver (derived from raw) that could be
> > > > given certain parameters that would then be stored in the image. But I
> > > > understand that this is a no-go, and I can see why.
> > > > 
> > > > I guess the optimal way would be for the parameters to look something
> > > > like:
> > > > 
> > > >    -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
> > > >    -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
> > > >    -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
> > > >    -device nvme-ns,drive=blk_ns2,...
> > > >    -device nvme,...
> > > > 
> > > > My question is how to state the parent/child relationship between the
> > > > nvme and nvme-ns devices. I've been looking at how ide and virtio does
> > > > this, and maybe a "bus" is the right way to go?
> > > 
> > > I've added Markus to the address list, because of this question. No
> > > other (new) comments from me on the thread starter at this time, just
> > > keeping the full context.
> > > 
> > 
> > Hi all,
> > 
> > I've successfully implemented this by introducing a new 'nvme-ns' device
> > model. The nvme device creates a bus named from the device id ('id'
> > parameter) and the nvme-ns devices are then registered on this.
> > 
> > This results in an nvme device being created like this (two namespaces
> > example):
> > 
> >   -drive file=nvme0n1.img,if=none,id=disk1
> >   -drive file=nvme0n2.img,if=none,id=disk2
> >   -device nvme,serial=deadbeef,id=nvme0
> >   -device nvme-ns,drive=disk1,bus=nvme0,nsid=1
> >   -device nvme-ns,drive=disk2,bus=nvme0,nsid=2
> > 
> > How does that look as a way forward?
> 
> This looks very similar to what other devices do (one bus controller
> that has multiple devices on its bus), so I like it.
> 
> The thing that is special here is that -device nvme is already a block
> device by itself that can take a drive property. So how does this play
> together? Can I choose to either specify a drive directly for the nvme
> device or nvme-ns devices, but when I do both, I will get an error? What
> happens if I don't specify a drive for nvme, but also don't add nvme-ns
> devices?
> 

Hi Kevin,

Yes, the nvme device is already a block device. My current patch removes
that property from the nvme device. I guess this breaks backward
compatibility. We could accept a drive for the nvme device only if no
nvme-ns devices are configured and connected on the bus.

I'm not entirely sure about the spec, but my gut tells me that an nvme
device without any namespaces is technically a valid device, although it
is a bit useless.
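A standalone C sketch of the mutual-exclusion check suggested above. This is not QEMU code; the `NvmeCtrl` fields and `nvme_check_config` are invented names for illustration only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for the controller state; the
 * field names are invented and are not the real QEMU ones. */
typedef struct NvmeCtrl {
    const char *legacy_drive;  /* drive=... given directly on -device nvme */
    unsigned num_namespaces;   /* nvme-ns devices attached on the bus */
} NvmeCtrl;

/* Accept a legacy drive only when no nvme-ns device sits on the bus.
 * Returns true if the configuration is acceptable. */
static bool nvme_check_config(const NvmeCtrl *n, const char **errp)
{
    if (n->legacy_drive != NULL && n->num_namespaces > 0) {
        *errp = "drive= and nvme-ns devices are mutually exclusive";
        return false;
    }
    /* A controller with no namespaces at all is left as valid, matching
     * the reading of the spec above (valid, if a bit useless). */
    return true;
}
```

The check would naturally live in the controller's realize path, after all bus children are known.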

I will post my patch (as part of a larger series) and we can discuss it
there.

Thanks for the feedback!

Klaus


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-25  5:51         ` Markus Armbruster
@ 2019-06-25 16:47           ` Klaus Birkelund
  2019-06-26  4:46             ` Markus Armbruster
  0 siblings, 1 reply; 13+ messages in thread
From: Klaus Birkelund @ 2019-06-25 16:47 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Kevin Wolf, qemu-block, qemu-devel, Max Reitz, Keith Busch, Laszlo Ersek

On Tue, Jun 25, 2019 at 07:51:29AM +0200, Markus Armbruster wrote:
> Laszlo Ersek <lersek@redhat.com> writes:
> 
> > On 06/24/19 12:18, Kevin Wolf wrote:
> >> Am 24.06.2019 um 10:01 hat Klaus Birkelund geschrieben:
> >>> On Thu, Jun 20, 2019 at 05:37:24PM +0200, Laszlo Ersek wrote:
> >>>> On 06/17/19 10:12, Klaus Birkelund wrote:
> >>>>> Hi all,
> >>>>>
> >>>>> I'm thinking about how to support multiple namespaces in the NVMe
> >>>>> device. My first idea was to add a "namespaces" property array to the
> >>>>> device that references blockdevs, but as Laszlo writes below, this might
> >>>>> not be the best idea. It also makes it troublesome to add per-namespace
> >>>>> parameters (which is something I will be required to do for other
> >>>>> reasons). Some of you might remember my first attempt at this that
> >>>>> included adding a new block driver (derived from raw) that could be
> >>>>> given certain parameters that would then be stored in the image. But I
> >>>>> understand that this is a no-go, and I can see why.
> >>>>>
> >>>>> I guess the optimal way would be such that the parameters was something
> >>>>> like:
> >>>>>
> >>>>>    -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
> >>>>>    -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
> >>>>>    -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
> >>>>>    -device nvme-ns,drive=blk_ns2,...
> >>>>>    -device nvme,...
> >>>>>
> >>>>> My question is how to state the parent/child relationship between the
> >>>>> nvme and nvme-ns devices. I've been looking at how ide and virtio does
> >>>>> this, and maybe a "bus" is the right way to go?
> >>>>
> >>>> I've added Markus to the address list, because of this question. No
> >>>> other (new) comments from me on the thread starter at this time, just
> >>>> keeping the full context.
> >>>>
> >>>
> >>> Hi all,
> >>>
> >>> I've successfully implemented this by introducing a new 'nvme-ns' device
> >>> model. The nvme device creates a bus named from the device id ('id'
> >>> parameter) and the nvme-ns devices are then registered on this.
> >>>
> >>> This results in an nvme device being created like this (two namespaces
> >>> example):
> >>>
> >>>   -drive file=nvme0n1.img,if=none,id=disk1
> >>>   -drive file=nvme0n2.img,if=none,id=disk2
> >>>   -device nvme,serial=deadbeef,id=nvme0
> >>>   -device nvme-ns,drive=disk1,bus=nvme0,nsid=1
> >>>   -device nvme-ns,drive=disk2,bus=nvme0,nsid=2
> >>>
> >>> How does that look as a way forward?
> >> 
> >> This looks very similar to what other devices do (one bus controller
> >> that has multiple devices on its bus), so I like it.
> 
> Devices can be wired together without a bus intermediary.  You
> definitely want a bus when the physical connection you model has one.
> If not, a bus may be useful anyway, say because it provides a convenient
> way to encapsulate the connection model, or to support -device bus=...
> 
 
I'm not sure how to wire it together without the bus abstraction, so
I'll stick with the bus for now. It *is* extremely convenient!
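The wiring discussed earlier in the thread (the controller creates a bus named after its device id; nvme-ns devices attach to it with a unique nsid) can be modeled in a standalone C sketch. All types and names here are hypothetical illustrations, not real QEMU qdev structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NVME_MAX_NS 32  /* arbitrary cap for the sketch */

/* Hypothetical model of an nvme-ns device on the controller's bus. */
typedef struct NvmeNamespace {
    unsigned nsid;
    const char *blockdev;  /* backing blockdev/drive id */
} NvmeNamespace;

/* Hypothetical model of the bus the controller creates. */
typedef struct NvmeBus {
    char name[32];
    const NvmeNamespace *ns[NVME_MAX_NS];
    unsigned num_ns;
} NvmeBus;

static void nvme_bus_init(NvmeBus *bus, const char *ctrl_id)
{
    /* The bus is named from the controller's 'id' parameter. */
    memset(bus, 0, sizeof(*bus));
    snprintf(bus->name, sizeof(bus->name), "%s", ctrl_id);
}

/* Attach a namespace; fails on a duplicate nsid or a full bus. */
static bool nvme_bus_attach(NvmeBus *bus, const NvmeNamespace *ns)
{
    if (bus->num_ns == NVME_MAX_NS) {
        return false;
    }
    for (unsigned i = 0; i < bus->num_ns; i++) {
        if (bus->ns[i]->nsid == ns->nsid) {
            return false;  /* nsid collision */
        }
    }
    bus->ns[bus->num_ns++] = ns;
    return true;
}
```

This mirrors the command line from the thread: `-device nvme,id=nvme0` creates the bus, and each `-device nvme-ns,bus=nvme0,nsid=N` is one attach call.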

Cheers,
Klaus


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-25 16:47           ` Klaus Birkelund
@ 2019-06-26  4:46             ` Markus Armbruster
  2019-06-26 10:14               ` Paolo Bonzini
  0 siblings, 1 reply; 13+ messages in thread
From: Markus Armbruster @ 2019-06-26  4:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Max Reitz, Keith Busch,
	Paolo Bonzini, Laszlo Ersek

Cc: QOM maintainers in case I'm talking nonsense about QOM.

Klaus Birkelund <klaus@birkelund.eu> writes:

> On Tue, Jun 25, 2019 at 07:51:29AM +0200, Markus Armbruster wrote:
>> Laszlo Ersek <lersek@redhat.com> writes:
>> 
>> > On 06/24/19 12:18, Kevin Wolf wrote:
>> >> Am 24.06.2019 um 10:01 hat Klaus Birkelund geschrieben:
>> >>> On Thu, Jun 20, 2019 at 05:37:24PM +0200, Laszlo Ersek wrote:
>> >>>> On 06/17/19 10:12, Klaus Birkelund wrote:
>> >>>>> Hi all,
>> >>>>>
>> >>>>> I'm thinking about how to support multiple namespaces in the NVMe
>> >>>>> device. My first idea was to add a "namespaces" property array to the
>> >>>>> device that references blockdevs, but as Laszlo writes below, this might
>> >>>>> not be the best idea. It also makes it troublesome to add per-namespace
>> >>>>> parameters (which is something I will be required to do for other
>> >>>>> reasons). Some of you might remember my first attempt at this that
>> >>>>> included adding a new block driver (derived from raw) that could be
>> >>>>> given certain parameters that would then be stored in the image. But I
>> >>>>> understand that this is a no-go, and I can see why.
>> >>>>>
>> >>>>> I guess the optimal way would be such that the parameters was something
>> >>>>> like:
>> >>>>>
>> >>>>>    -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
>> >>>>>    -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
>> >>>>>    -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
>> >>>>>    -device nvme-ns,drive=blk_ns2,...
>> >>>>>    -device nvme,...
>> >>>>>
>> >>>>> My question is how to state the parent/child relationship between the
>> >>>>> nvme and nvme-ns devices. I've been looking at how ide and virtio does
>> >>>>> this, and maybe a "bus" is the right way to go?
>> >>>>
>> >>>> I've added Markus to the address list, because of this question. No
>> >>>> other (new) comments from me on the thread starter at this time, just
>> >>>> keeping the full context.
>> >>>>
>> >>>
>> >>> Hi all,
>> >>>
>> >>> I've successfully implemented this by introducing a new 'nvme-ns' device
>> >>> model. The nvme device creates a bus named from the device id ('id'
>> >>> parameter) and the nvme-ns devices are then registered on this.
>> >>>
>> >>> This results in an nvme device being created like this (two namespaces
>> >>> example):
>> >>>
>> >>>   -drive file=nvme0n1.img,if=none,id=disk1
>> >>>   -drive file=nvme0n2.img,if=none,id=disk2
>> >>>   -device nvme,serial=deadbeef,id=nvme0
>> >>>   -device nvme-ns,drive=disk1,bus=nvme0,nsid=1
>> >>>   -device nvme-ns,drive=disk2,bus=nvme0,nsid=2
>> >>>
>> >>> How does that look as a way forward?
>> >> 
>> >> This looks very similar to what other devices do (one bus controller
>> >> that has multiple devices on its bus), so I like it.
>> 
>> Devices can be wired together without a bus intermediary.  You
>> definitely want a bus when the physical connection you model has one.
>> If not, a bus may be useful anyway, say because it provides a convenient
>> way to encapsulate the connection model, or to support -device bus=...
>> 
>  
> I'm not sure how to wire it together without the bus abstraction? So
> I'll stick with the bus for now. It *is* extremely convenient!

As far as I can tell offhand, a common use of bus-less connections
between devices is wiring together composite devices.  Example:

    static void designware_pcie_host_init(Object *obj)
    {
        DesignwarePCIEHost *s = DESIGNWARE_PCIE_HOST(obj);
        DesignwarePCIERoot *root = &s->root;

        object_initialize_child(obj, "root",  root, sizeof(*root),
                                TYPE_DESIGNWARE_PCIE_ROOT, &error_abort, NULL);
        qdev_prop_set_int32(DEVICE(root), "addr", PCI_DEVFN(0, 0));
        qdev_prop_set_bit(DEVICE(root), "multifunction", false);
    }

This creates a TYPE_DESIGNWARE_PCIE_ROOT device "within" the
TYPE_DESIGNWARE_PCIE_HOST device.

Bus-less connections between separate devices (i.e. neither device is a
part of the other) are also possible.  But I'm failing at grep right
now.  Here's an example for connecting a device to a machine:

    static void mch_realize(PCIDevice *d, Error **errp)
    {
        int i;
        MCHPCIState *mch = MCH_PCI_DEVICE(d);

        [...]
        object_property_add_const_link(qdev_get_machine(), "smram",
                                       OBJECT(&mch->smram), &error_abort);
        [...]
    }

Paolo, can you provide guidance on when to use a bus, and when not to?


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-25 16:45       ` Klaus Birkelund
@ 2019-06-26  4:54         ` Markus Armbruster
  0 siblings, 0 replies; 13+ messages in thread
From: Markus Armbruster @ 2019-06-26  4:54 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Keith Busch, Laszlo Ersek, qemu-block, Max Reitz

Klaus Birkelund <klaus@birkelund.eu> writes:

> On Mon, Jun 24, 2019 at 12:18:45PM +0200, Kevin Wolf wrote:
>> Am 24.06.2019 um 10:01 hat Klaus Birkelund geschrieben:
>> > On Thu, Jun 20, 2019 at 05:37:24PM +0200, Laszlo Ersek wrote:
>> > > On 06/17/19 10:12, Klaus Birkelund wrote:
>> > > > Hi all,
>> > > > 
>> > > > I'm thinking about how to support multiple namespaces in the NVMe
>> > > > device. My first idea was to add a "namespaces" property array to the
>> > > > device that references blockdevs, but as Laszlo writes below, this might
>> > > > not be the best idea. It also makes it troublesome to add per-namespace
>> > > > parameters (which is something I will be required to do for other
>> > > > reasons). Some of you might remember my first attempt at this that
>> > > > included adding a new block driver (derived from raw) that could be
>> > > > given certain parameters that would then be stored in the image. But I
>> > > > understand that this is a no-go, and I can see why.
>> > > > 
>> > > > I guess the optimal way would be such that the parameters was something
>> > > > like:
>> > > > 
>> > > >    -blockdev raw,node-name=blk_ns1,file.driver=file,file.filename=blk_ns1.img
>> > > >    -blockdev raw,node-name=blk_ns2,file.driver=file,file.filename=blk_ns2.img
>> > > >    -device nvme-ns,drive=blk_ns1,ns-specific-options (nsfeat,mc,dlfeat)...
>> > > >    -device nvme-ns,drive=blk_ns2,...
>> > > >    -device nvme,...
>> > > > 
>> > > > My question is how to state the parent/child relationship between the
>> > > > nvme and nvme-ns devices. I've been looking at how ide and virtio does
>> > > > this, and maybe a "bus" is the right way to go?
>> > > 
>> > > I've added Markus to the address list, because of this question. No
>> > > other (new) comments from me on the thread starter at this time, just
>> > > keeping the full context.
>> > > 
>> > 
>> > Hi all,
>> > 
>> > I've successfully implemented this by introducing a new 'nvme-ns' device
>> > model. The nvme device creates a bus named from the device id ('id'
>> > parameter) and the nvme-ns devices are then registered on this.
>> > 
>> > This results in an nvme device being created like this (two namespaces
>> > example):
>> > 
>> >   -drive file=nvme0n1.img,if=none,id=disk1
>> >   -drive file=nvme0n2.img,if=none,id=disk2
>> >   -device nvme,serial=deadbeef,id=nvme0
>> >   -device nvme-ns,drive=disk1,bus=nvme0,nsid=1
>> >   -device nvme-ns,drive=disk2,bus=nvme0,nsid=2
>> > 
>> > How does that look as a way forward?
>> 
>> This looks very similar to what other devices do (one bus controller
>> that has multiple devices on its bus), so I like it.
>> 
>> The thing that is special here is that -device nvme is already a block
>> device by itself that can take a drive property. So how does this play
>> together? Can I choose to either specify a drive directly for the nvme
>> device or nvme-ns devices, but when I do both, I will get an error? What
>> happens if I don't specify a drive for nvme, but also don't add nvme-ns
>> devices?
>> 
>
> Hi Kevin,
>
> Yes, the nvme device is already a block device. My current patch removes
> that property from the nvme device. I guess this breaks backward
> compatibility. We could accept a drive for the nvme device only if no
> nvme-ns devices are configured and connected on the bus.

Sounds awful :)

> I'm not entirely sure on the spec, but my gut tells me that an nvme
> device without any namespaces is technically a valid device, although it
> is a bit useless.

So maybe the device actually consists of a controller part (no drive
property) and namespace parts (one drive property each).

If yes, then the existing nvme device model is flawed.  Suggest to
deprecate and start over.  This should be possible without duplicating
code.

The alternative is bad magic, like the one you sketched above.  We
usually come to regret such magic.

Whether the controller device should be a composite device containing
the namespace parts is a separate question.
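The alternative left open here, a composite controller that embeds and initializes its namespace parts itself (analogous to `object_initialize_child()` in the designware example earlier in the thread), can be sketched in standalone C. The types and field names are invented for illustration and are not QEMU code:

```c
#include <assert.h>

/* Hypothetical namespace part owned by the controller. */
typedef struct NvmeNsPart {
    unsigned nsid;
    const char *blockdev;
} NvmeNsPart;

/* Hypothetical composite controller: the namespace parts live inside
 * the parent rather than being separate user-created devices. */
typedef struct NvmeCompositeCtrl {
    NvmeNsPart ns[2];
    unsigned num_ns;
} NvmeCompositeCtrl;

static void nvme_composite_init(NvmeCompositeCtrl *c,
                                const char *blk1, const char *blk2)
{
    /* The parent decides how many children exist and wires them up;
     * with a bus, the user creates each nvme-ns device instead. */
    c->ns[0] = (NvmeNsPart){ .nsid = 1, .blockdev = blk1 };
    c->ns[1] = (NvmeNsPart){ .nsid = 2, .blockdev = blk2 };
    c->num_ns = 2;
}
```

The trade-off is visible even in the sketch: composition fixes the namespace count in the parent, while the bus model leaves it to the user's command line.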

> I will post my patch (as part of a larger series) and we can discuss it
> there.

Yes, please.

> Thanks for the feedback!
>
> Klaus


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-26  4:46             ` Markus Armbruster
@ 2019-06-26 10:14               ` Paolo Bonzini
  2019-06-26 16:57                 ` Klaus Birkelund
  0 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2019-06-26 10:14 UTC (permalink / raw)
  To: Markus Armbruster, qemu-devel
  Cc: Kevin Wolf, Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Max Reitz, Keith Busch,
	Laszlo Ersek

On 26/06/19 06:46, Markus Armbruster wrote:
>> I'm not sure how to wire it together without the bus abstraction? So
>> I'll stick with the bus for now. It *is* extremely convenient!
> 
> As far as I can tell offhand, a common use of bus-less connections
> between devices is wiring together composite devices.  Example:
> 
>     static void designware_pcie_host_init(Object *obj)
>     {
>         DesignwarePCIEHost *s = DESIGNWARE_PCIE_HOST(obj);
>         DesignwarePCIERoot *root = &s->root;
> 
>         object_initialize_child(obj, "root",  root, sizeof(*root),
>                                 TYPE_DESIGNWARE_PCIE_ROOT, &error_abort, NULL);
>         qdev_prop_set_int32(DEVICE(root), "addr", PCI_DEVFN(0, 0));
>         qdev_prop_set_bit(DEVICE(root), "multifunction", false);
>     }
> 
> This creates a TYPE_DESIGNWARE_PCIE_ROOT device "within" the
> TYPE_DESIGNWARE_PCIE_HOST device.
> 
> Bus-less connections between separate devices (i.e. neither device is a
> part of the other) are also possible.  But I'm failing at grep right
> now.  Here's an example for connecting a device to a machine:
> 
>     static void mch_realize(PCIDevice *d, Error **errp)
>     {
>         int i;
>         MCHPCIState *mch = MCH_PCI_DEVICE(d);
> 
>         [...]
>         object_property_add_const_link(qdev_get_machine(), "smram",
>                                        OBJECT(&mch->smram), &error_abort);
>         [...]
>     }

This is a link to a memory region.  A connection to a separate device
can be found in hw/dma/xilinx_axidma.c and hw/net/xilinx_axienet.c,
where you have

         data stream <------------> data stream
       /                                        \
   dma                                            enet
       \                                        /
         control stream <------> control stream

where the horizontal links in the middle are set up by board code, while
the diagonal lines on the side are set up by device code.

> Paolo, can you provide guidance on when to use a bus, and when not to?

I would definitely use a bus if 1) it is common for the user (and not
for machine code) to set up the connection, and 2) the relationship is
parent-child.  Link properties are basically unused on the command line,
and it only makes sense to do something different if the connection is
some kind of graph, so that bus-child does not cut it.

Paolo


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [Qemu-block] [RFC] nvme: how to support multiple namespaces
  2019-06-26 10:14               ` Paolo Bonzini
@ 2019-06-26 16:57                 ` Klaus Birkelund
  0 siblings, 0 replies; 13+ messages in thread
From: Klaus Birkelund @ 2019-06-26 16:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, Daniel P. Berrangé,
	Eduardo Habkost, qemu-block, Markus Armbruster, qemu-devel,
	Keith Busch, Max Reitz, Laszlo Ersek

On Wed, Jun 26, 2019 at 12:14:15PM +0200, Paolo Bonzini wrote:
> On 26/06/19 06:46, Markus Armbruster wrote:
> >> I'm not sure how to wire it together without the bus abstraction? So
> >> I'll stick with the bus for now. It *is* extremely convenient!
> > 
> > As far as I can tell offhand, a common use of bus-less connections
> > between devices is wiring together composite devices.  Example:
> > 
> >     static void designware_pcie_host_init(Object *obj)
> >     {
> >         DesignwarePCIEHost *s = DESIGNWARE_PCIE_HOST(obj);
> >         DesignwarePCIERoot *root = &s->root;
> > 
> >         object_initialize_child(obj, "root",  root, sizeof(*root),
> >                                 TYPE_DESIGNWARE_PCIE_ROOT, &error_abort, NULL);
> >         qdev_prop_set_int32(DEVICE(root), "addr", PCI_DEVFN(0, 0));
> >         qdev_prop_set_bit(DEVICE(root), "multifunction", false);
> >     }
> > 
> > This creates a TYPE_DESIGNWARE_PCIE_ROOT device "within" the
> > TYPE_DESIGNWARE_PCIE_HOST device.
> > 
> > Bus-less connections between separate devices (i.e. neither device is a
> > part of the other) are also possible.  But I'm failing at grep right
> > now.  Here's an example for connecting a device to a machine:
> > 
> >     static void mch_realize(PCIDevice *d, Error **errp)
> >     {
> >         int i;
> >         MCHPCIState *mch = MCH_PCI_DEVICE(d);
> > 
> >         [...]
> >         object_property_add_const_link(qdev_get_machine(), "smram",
> >                                        OBJECT(&mch->smram), &error_abort);
> >         [...]
> >     }
> 
> This is a link to a memory region.  A connection to a separate device
> can be found in hw/dma/xilinx_axidma.c and hw/net/xilinx_axienet.c,
> where you have
> 
>          data stream <------------> data stream
>        /                                        \
>    dma                                            enet
>        \                                        /
>          control stream <------> control stream
> 
> where the horizontal links in the middle are set up by board code, while
> the diagonal lines on the side are set up by device code.
> 
> > Paolo, can you provide guidance on when to use a bus, and when not to?
> 
> I would definitely use a bus if 1) it is common for the user (and not
> for machine code) to set up the connection 2) the relationship is
> parent-child.  Link properties are basically unused on the command line,
> and it only makes sense to make something different if the connection is
> some kind of graph so bus-child does not cut it.
> 

Definitely looks like the bus is the way to go. The controller/namespace
relationship is strictly parent-child.

Thanks both of you for the advice!


Klaus


^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2019-06-26 17:00 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-17  8:12 [Qemu-devel] [RFC] nvme: how to support multiple namespaces Klaus Birkelund
2019-06-20 15:37 ` Laszlo Ersek
2019-06-24  8:01   ` [Qemu-devel] [Qemu-block] " Klaus Birkelund
2019-06-24 10:18     ` Kevin Wolf
2019-06-24 20:46       ` Laszlo Ersek
2019-06-25  5:51         ` Markus Armbruster
2019-06-25 16:47           ` Klaus Birkelund
2019-06-26  4:46             ` Markus Armbruster
2019-06-26 10:14               ` Paolo Bonzini
2019-06-26 16:57                 ` Klaus Birkelund
2019-06-25  7:24         ` Kevin Wolf
2019-06-25 16:45       ` Klaus Birkelund
2019-06-26  4:54         ` Markus Armbruster
