From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Jan Beulich' <JBeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wei.liu2@citrix.com>,
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
    Andrew Cooper <Andrew.Cooper3@citrix.com>, "Tim (Xen.org)" <tim@xen.org>,
    George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien.grall@arm.com>,
    xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [PATCH 4/5] iommu: introduce iommu_groups
Date: Fri, 31 May 2019 14:21:57 +0000
Message-ID: <effeb9ace857427993a30f5575dc3a27@AMSPEX02CL03.citrite.net>
In-Reply-To: <5CF1368C02000078002340B4@prv1-mh.provo.novell.com>

> -----Original Message-----
[snip]
> >> > --- a/xen/include/xen/pci.h
> >> > +++ b/xen/include/xen/pci.h
> >> > @@ -75,6 +75,9 @@ struct pci_dev {
> >> >      struct list_head alldevs_list;
> >> >      struct list_head domain_list;
> >> >
> >> > +    struct list_head grpdevs_list;
> >>
> >> Does this separate list provide much value? The devices in a group
> >> are going to move between two domain_list-s all in one go, so
> >> once you know the domain of one you'll be able to find the rest by
> >> iterating that domain's list. Is the fear that such an iteration may
> >> be tens of thousands of entries long, and hence become an issue
> >> when traversed? I have no idea how many PCI devices the biggest
> >> systems today would have, but if traversal was an issue, then it
> >> would already be with the code we've got now.
> >
> > I'd prefer to keep it... It makes the re-implementation of the domctl in the
> > next patch more straightforward.
>
> I can accept this as the positive side. But there's extra storage
> needed (not much, but anyway), and the more (independent)
> lists we have that devices can be on, the more likely it'll be that
> one of them gets screwed up at some point (e.g. by forgetting
> to remove a device from one of them prior to de-allocation).

Ok, I'll drop the list and just match on the grp pointer.

  Paul

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel