From: "Jan Beulich" <JBeulich@suse.com>
To: Paul Durrant <paul.durrant@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>,
	george.dunlap@citrix.com, Julien Grall <julien.grall@arm.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [PATCH 4/5] iommu: introduce iommu_groups
Date: Fri, 31 May 2019 08:13:32 -0600
Message-ID: <5CF1368C02000078002340B4@prv1-mh.provo.novell.com>
In-Reply-To: <cb3fd070fa6748148dd4af032a7b6edc@AMSPEX02CL03.citrite.net>

>>> On 31.05.19 at 15:55, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 15 May 2019 15:18
>> 
>> >>> On 08.05.19 at 15:24, <paul.durrant@citrix.com> wrote:
>> > +    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
>> > +    grp = get_iommu_group(id);
>> 
>> I don't think solitary devices should be allocated a group. Also
>> you don't handle failure of ops->get_device_group_id().
> 
> True, it can fail in the VT-d case. Not clear what to do in that case though; 
> I guess assume - for now - that the device gets its own group.
> I think all devices should get a group. The group will ultimately be the 
> unit of assignment to a VM and, in the best case, we *expect* each device to 
> have its own group... it's only when there are quirks, legacy bridges etc. 
> that multiple devices should end up in the same group. This is consistent 
> with Linux's IOMMU groups.

Well, I'm not worried much about consistency with Linux here, as
you're not cloning their implementation anyway (afaict). To me at
this point wrapping individual devices in groups looks like just extra
overhead with no real gain. But, granted, the gain may appear
later.
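
(For illustration only, a minimal sketch of the fallback Paul describes,
following the shape of the quoted hunk; everything beyond that hunk - the
function signature, the pdev->grp field, the grp->devs_list head and the
alloc_unique_group_id() helper - is an assumption made for this sketch,
not necessarily what the patch does:)

    int iommu_group_assign(struct pci_dev *pdev)
    {
        const struct iommu_ops *ops = iommu_get_ops();
        struct iommu_group *grp;
        int id;

        /* Nothing to do if the vendor code can't report group IDs. */
        if ( !ops->get_device_group_id )
            return 0;

        id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
        if ( id < 0 )
            /*
             * Fallback under discussion: on lookup failure give the
             * device a group of its own rather than failing device
             * registration.  Helper name is hypothetical.
             */
            id = alloc_unique_group_id();

        grp = get_iommu_group(id);
        if ( !grp )
            return -ENOMEM;

        pdev->grp = grp;                                      /* assumed field */
        list_add_tail(&pdev->grpdevs_list, &grp->devs_list);  /* assumed head */

        return 0;
    }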

>> > --- a/xen/include/xen/pci.h
>> > +++ b/xen/include/xen/pci.h
>> > @@ -75,6 +75,9 @@ struct pci_dev {
>> >      struct list_head alldevs_list;
>> >      struct list_head domain_list;
>> >
>> > +    struct list_head grpdevs_list;
>> 
>> Does this separate list provide much value? The devices in a group
>> are going to move between two domain_list-s all in one go, so
>> once you know the domain of one you'll be able to find the rest by
>> iterating that domain's list. Is the fear that such an iteration may
>> be tens of thousands of entries long, and hence become an issue
>> when traversed? I have no idea how many PCI devices the biggest
>> systems today would have, but if traversal was an issue, then it
>> would already be with the code we've got now.
> 
> I'd prefer to keep it... It makes the re-implementation of the domctl in the 
> next patch more straightforward.

I can accept this as the upside. But there's extra storage needed
(not much, but still), and the more (independent) lists devices can
be on, the more likely it is that one of them gets out of sync at
some point (e.g. by forgetting to remove a device from one of them
prior to de-allocation).
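
(Again purely for illustration, the two iteration styles being weighed;
the for_each_pdev() form mirrors the existing helper in
xen/include/xen/pci.h, while the group's devs_list head, the pdev->grp
pointer and process() are assumptions made for the sake of the sketch:)

    /* With the dedicated per-group list the patch adds: */
    list_for_each_entry ( pdev, &grp->devs_list, grpdevs_list )
        process(pdev);

    /* Without it: walk the owning domain's device list and filter by group. */
    for_each_pdev ( d, pdev )
        if ( pdev->grp == grp )
            process(pdev);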

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
