* [RFCv2 0/3] vmd irq list shortening, map allocation
From: Jon Derrick @ 2016-09-02 17:53 UTC
  To: helgaas; +Cc: Jon Derrick, keith.busch, linux-pci

V2:
Added a map for vmd irqs so that all vmd irqs within an irq list are
allocated from a single page. Once many devices share the irq in an
irq list, this may reduce list traversal latency.

V1:
A couple of RFC patches here. I don't notice a measurable benefit, but
they do reduce the size of struct vmd_irq_list, and hopefully we gain
some cache benefit from that.

These are based on:
https://patchwork.kernel.org/patch/9304179/
https://patchwork.kernel.org/patch/9304181/

Jon Derrick (3):
  vmd: eliminate vmd_vector member from list type
  vmd: eliminate index member from irq list
  pci/vmd: Create irq map for irq nodes

 arch/x86/pci/vmd.c | 94 ++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 73 insertions(+), 21 deletions(-)

-- 
1.8.3.1


* [RFCv2 1/3] vmd: eliminate vmd_vector member from list type
From: Jon Derrick @ 2016-09-02 17:53 UTC
  To: helgaas; +Cc: Jon Derrick, keith.busch, linux-pci

Eliminate the unused vmd member and the redundant vmd_vector member
from struct vmd_irq_list, and discover the vector using
pci_irq_vector() instead.

Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 arch/x86/pci/vmd.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/x86/pci/vmd.c b/arch/x86/pci/vmd.c
index 4c0eac7..9320216 100644
--- a/arch/x86/pci/vmd.c
+++ b/arch/x86/pci/vmd.c
@@ -58,15 +58,12 @@ struct vmd_irq {
 /**
  * struct vmd_irq_list - list of driver requested IRQs mapping to a VMD vector
  * @irq_list:	the list of irq's the VMD one demuxes to.
- * @vmd_vector:	the h/w IRQ assigned to the VMD.
  * @index:	index into the VMD MSI-X table; used for message routing.
  * @count:	number of child IRQs assigned to this vector; used to track
  *		sharing.
  */
 struct vmd_irq_list {
 	struct list_head	irq_list;
-	struct vmd_dev		*vmd;
-	unsigned int		vmd_vector;
 	unsigned int		index;
 	unsigned int		count;
 };
@@ -207,8 +204,10 @@ static int vmd_msi_init(struct irq_domain *domain, struct msi_domain_info *info,
 	vmdirq->irq = vmd_next_irq(vmd, desc);
 	vmdirq->virq = virq;
 
-	irq_domain_set_info(domain, virq, vmdirq->irq->vmd_vector, info->chip,
-			    vmdirq, handle_untracked_irq, vmd, NULL);
+	irq_domain_set_info(domain, virq,
+			    pci_irq_vector(vmd->dev, vmdirq->irq->index),
+			    info->chip, vmdirq,
+			    handle_untracked_irq, vmd, NULL);
 	return 0;
 }
 
@@ -690,10 +689,8 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
 
 	for (i = 0; i < vmd->msix_count; i++) {
 		INIT_LIST_HEAD(&vmd->irqs[i].irq_list);
-		vmd->irqs[i].vmd_vector = pci_irq_vector(dev, i);
 		vmd->irqs[i].index = i;
-
-		err = devm_request_irq(&dev->dev, vmd->irqs[i].vmd_vector,
+		err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i),
 				       vmd_irq, 0, "vmd", &vmd->irqs[i]);
 		if (err)
 			return err;
-- 
1.8.3.1


* [RFCv2 2/3] vmd: eliminate index member from irq list
From: Jon Derrick @ 2016-09-02 17:53 UTC
  To: helgaas; +Cc: Jon Derrick, keith.busch, linux-pci

Use pointer arithmetic to derive an irq list's index from its offset
relative to the head of the vmd->irqs array, rather than storing it in
the structure. (Subtracting two struct vmd_irq_list pointers yields the
element count, not the byte offset.)

Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 arch/x86/pci/vmd.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/x86/pci/vmd.c b/arch/x86/pci/vmd.c
index 9320216..aa8d74e 100644
--- a/arch/x86/pci/vmd.c
+++ b/arch/x86/pci/vmd.c
@@ -58,13 +58,11 @@ struct vmd_irq {
 /**
  * struct vmd_irq_list - list of driver requested IRQs mapping to a VMD vector
  * @irq_list:	the list of irq's the VMD one demuxes to.
- * @index:	index into the VMD MSI-X table; used for message routing.
  * @count:	number of child IRQs assigned to this vector; used to track
  *		sharing.
  */
 struct vmd_irq_list {
 	struct list_head	irq_list;
-	unsigned int		index;
 	unsigned int		count;
 };
 
@@ -93,6 +91,12 @@ static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
 	return container_of(bus->sysdata, struct vmd_dev, sysdata);
 }
 
+static inline unsigned int index_from_irqs(struct vmd_dev *vmd,
+					   struct vmd_irq_list *irqs)
+{
+	return irqs - vmd->irqs;
+}
+
 /*
  * Drivers managing a device in a VMD domain allocate their own IRQs as before,
  * but the MSI entry for the hardware it's driving will be programmed with a
@@ -105,9 +109,11 @@ static void vmd_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 {
 	struct vmd_irq *vmdirq = data->chip_data;
 	struct vmd_irq_list *irq = vmdirq->irq;
+	struct vmd_dev *vmd = irq_data_get_irq_handler_data(data);
 
 	msg->address_hi = MSI_ADDR_BASE_HI;
-	msg->address_lo = MSI_ADDR_BASE_LO | MSI_ADDR_DEST_ID(irq->index);
+	msg->address_lo = MSI_ADDR_BASE_LO |
+			  MSI_ADDR_DEST_ID(index_from_irqs(vmd, irq));
 	msg->data = 0;
 }
 
@@ -196,6 +202,7 @@ static int vmd_msi_init(struct irq_domain *domain, struct msi_domain_info *info,
 	struct msi_desc *desc = arg->desc;
 	struct vmd_dev *vmd = vmd_from_bus(msi_desc_to_pci_dev(desc)->bus);
 	struct vmd_irq *vmdirq = kzalloc(sizeof(*vmdirq), GFP_KERNEL);
+	unsigned int index, vector;
 
 	if (!vmdirq)
 		return -ENOMEM;
@@ -203,10 +210,10 @@ static int vmd_msi_init(struct irq_domain *domain, struct msi_domain_info *info,
 	INIT_LIST_HEAD(&vmdirq->node);
 	vmdirq->irq = vmd_next_irq(vmd, desc);
 	vmdirq->virq = virq;
+	index = index_from_irqs(vmd, vmdirq->irq);
+	vector = pci_irq_vector(vmd->dev, index);
 
-	irq_domain_set_info(domain, virq,
-			    pci_irq_vector(vmd->dev, vmdirq->irq->index),
-			    info->chip, vmdirq,
+	irq_domain_set_info(domain, virq, vector, info->chip, vmdirq,
 			    handle_untracked_irq, vmd, NULL);
 	return 0;
 }
@@ -689,7 +696,6 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
 
 	for (i = 0; i < vmd->msix_count; i++) {
 		INIT_LIST_HEAD(&vmd->irqs[i].irq_list);
-		vmd->irqs[i].index = i;
 		err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i),
 				       vmd_irq, 0, "vmd", &vmd->irqs[i]);
 		if (err)
-- 
1.8.3.1


* [RFCv2 3/3] pci/vmd: Create irq map for irq nodes
From: Jon Derrick @ 2016-09-02 17:53 UTC
  To: helgaas; +Cc: Jon Derrick, keith.busch, linux-pci

Create an irq map per VMD MSI-X vector (irq list) and place vmd_irq
nodes into that map. The goal is to keep all vmd_irqs belonging to a
vmd_irq_list within a single page.

Each vmd_irq slot is tracked with the id allocator; if id allocation
fails (e.g. when many devices are connected), the code falls back to a
normal kzalloc.

Indexing and traversal are still managed with the list primitives,
because we still need the RCU protection.
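
For reference, assuming 4 KiB pages and 64-bit pointers, the padded
struct vmd_irq below works out to 64 bytes (one cache line), so
VMD_IRQS_PER_MAP = PAGE_SIZE / sizeof(struct vmd_irq) = 4096 / 64 = 64
slots per irq list.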

Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 arch/x86/pci/vmd.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 59 insertions(+), 10 deletions(-)

diff --git a/arch/x86/pci/vmd.c b/arch/x86/pci/vmd.c
index aa8d74e..c25fb46 100644
--- a/arch/x86/pci/vmd.c
+++ b/arch/x86/pci/vmd.c
@@ -21,6 +21,7 @@
 #include <linux/pci.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
+#include <linux/idr.h>
 
 #include <asm/irqdomain.h>
 #include <asm/device.h>
@@ -41,8 +42,9 @@ static DEFINE_RAW_SPINLOCK(list_lock);
  * @node:	list item for parent traversal.
  * @rcu:	RCU callback item for freeing.
  * @irq:	back pointer to parent.
- * @enabled:	true if driver enabled IRQ
  * @virq:	the virtual IRQ value provided to the requesting driver.
+ * @instance:	ida instance which is the mapping index in the irq map
+ * @enabled:	true if driver enabled IRQ
  *
  * Every MSI/MSI-X IRQ requested for a device in a VMD domain will be mapped to
  * a VMD IRQ using this structure.
@@ -51,8 +53,10 @@ struct vmd_irq {
 	struct list_head	node;
 	struct rcu_head		rcu;
 	struct vmd_irq_list	*irq;
-	bool			enabled;
 	unsigned int		virq;
+	int			instance;
+	bool			enabled;
+	u8			__pad[8];
 };
 
 /**
@@ -74,6 +78,9 @@ struct vmd_dev {
 
 	int msix_count;
 	struct vmd_irq_list	*irqs;
+	struct vmd_irq		*irq_map;
+	struct ida		*map_idas;
+#define VMD_IRQS_PER_MAP (PAGE_SIZE / sizeof(struct vmd_irq))
 
 	struct pci_sysdata	sysdata;
 	struct resource		resources[3];
@@ -96,6 +103,11 @@ static inline unsigned int index_from_irqs(struct vmd_dev *vmd,
 {
 	return irqs - vmd->irqs;
 }
+static inline unsigned int index_from_vmd_irq(struct vmd_dev *vmd,
+					      struct vmd_irq *vmd_irq)
+{
+	return ((void *)vmd_irq - (void *)vmd->irq_map) / PAGE_SIZE;
+}
 
 /*
  * Drivers managing a device in a VMD domain allocate their own IRQs as before,
@@ -201,16 +213,31 @@ static int vmd_msi_init(struct irq_domain *domain, struct msi_domain_info *info,
 {
 	struct msi_desc *desc = arg->desc;
 	struct vmd_dev *vmd = vmd_from_bus(msi_desc_to_pci_dev(desc)->bus);
-	struct vmd_irq *vmdirq = kzalloc(sizeof(*vmdirq), GFP_KERNEL);
+	struct vmd_irq *vmdirq;
+	struct vmd_irq_list *irq;
 	unsigned int index, vector;
-
-	if (!vmdirq)
-		return -ENOMEM;
+	int instance;
+
+	irq = vmd_next_irq(vmd, desc);
+	index = index_from_irqs(vmd, irq);
+	instance = ida_simple_get(&vmd->map_idas[index], 0, VMD_IRQS_PER_MAP,
+				  GFP_KERNEL);
+	if (instance < 0) {
+		vmdirq = kzalloc(sizeof(*vmdirq), GFP_KERNEL);
+		if (!vmdirq) {
+			BUG_ON(1);
+			return -ENOMEM;
+		}
+	} else {
+		struct vmd_irq *base = (void *)vmd->irq_map + index * PAGE_SIZE;
+		vmdirq = &base[instance];
+		memset(vmdirq, 0, sizeof(*vmdirq));
+	}
 
 	INIT_LIST_HEAD(&vmdirq->node);
-	vmdirq->irq = vmd_next_irq(vmd, desc);
+	vmdirq->irq = irq;
 	vmdirq->virq = virq;
-	index = index_from_irqs(vmd, vmdirq->irq);
+	vmdirq->instance = instance;
 	vector = pci_irq_vector(vmd->dev, index);
 
 	irq_domain_set_info(domain, virq, vector, info->chip, vmdirq,
@@ -221,6 +248,7 @@ static int vmd_msi_init(struct irq_domain *domain, struct msi_domain_info *info,
 static void vmd_msi_free(struct irq_domain *domain,
 			struct msi_domain_info *info, unsigned int virq)
 {
+	struct vmd_dev *vmd = irq_get_handler_data(virq);
 	struct vmd_irq *vmdirq = irq_get_chip_data(virq);
 	unsigned long flags;
 
@@ -229,9 +257,15 @@ static void vmd_msi_free(struct irq_domain *domain,
 	/* XXX: Potential optimization to rebalance */
 	raw_spin_lock_irqsave(&list_lock, flags);
 	vmdirq->irq->count--;
-	raw_spin_unlock_irqrestore(&list_lock, flags);
 
-	kfree_rcu(vmdirq, rcu);
+	if (vmdirq->instance < 0) {
+		raw_spin_unlock_irqrestore(&list_lock, flags);
+		kfree_rcu(vmdirq, rcu);
+	} else {
+		unsigned int index = index_from_vmd_irq(vmd, vmdirq);
+		ida_simple_remove(&vmd->map_idas[index], vmdirq->instance);
+		raw_spin_unlock_irqrestore(&list_lock, flags);
+	}
 }
 
 static int vmd_msi_prepare(struct irq_domain *domain, struct device *dev,
@@ -694,8 +728,19 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	if (!vmd->irqs)
 		return -ENOMEM;
 
+	/* devm_ doesn't provide the PAGE_SIZE alignment we want */
+	vmd->irq_map = kcalloc(vmd->msix_count, PAGE_SIZE, GFP_KERNEL);
+	if (!vmd->irq_map)
+		return -ENOMEM;
+
+	vmd->map_idas = devm_kcalloc(&dev->dev, vmd->msix_count,
+				     sizeof(*vmd->map_idas), GFP_KERNEL);
+	if (!vmd->map_idas)
+		return -ENOMEM;
+
 	for (i = 0; i < vmd->msix_count; i++) {
 		INIT_LIST_HEAD(&vmd->irqs[i].irq_list);
+		ida_init(&vmd->map_idas[i]);
 		err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i),
 				       vmd_irq, 0, "vmd", &vmd->irqs[i]);
 		if (err)
@@ -716,12 +761,16 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
 static void vmd_remove(struct pci_dev *dev)
 {
 	struct vmd_dev *vmd = pci_get_drvdata(dev);
+	int i;
 
 	vmd_detach_resources(vmd);
 	pci_set_drvdata(dev, NULL);
 	sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
 	pci_stop_root_bus(vmd->bus);
 	pci_remove_root_bus(vmd->bus);
+	for (i = 0; i < vmd->msix_count; i++)
+		ida_destroy(&vmd->map_idas[i]);
+	kfree(vmd->irq_map);
 	vmd_teardown_dma_ops(vmd);
 	irq_domain_remove(vmd->irq_domain);
 }
-- 
1.8.3.1


* Re: [RFCv2 0/3] vmd irq list shortening, map allocation
From: Bjorn Helgaas @ 2016-09-13 20:57 UTC
  To: Jon Derrick; +Cc: keith.busch, linux-pci

On Fri, Sep 02, 2016 at 11:53:03AM -0600, Jon Derrick wrote:
> V2:
> Added a map for vmd irqs so that all vmd irqs within an irq list are
> allocated from a single page. Once many devices share the irq in an
> irq list, this may reduce list traversal latency.
> 
> V1:
> A couple of RFC patches here. I don't notice a measurable benefit, but
> they do reduce the size of struct vmd_irq_list, and hopefully we gain
> some cache benefit from that.
> 
> These are based on:
> https://patchwork.kernel.org/patch/9304179/
> https://patchwork.kernel.org/patch/9304181/
> 
> Jon Derrick (3):
>   vmd: eliminate vmd_vector member from list type
>   vmd: eliminate index member from irq list
>   pci/vmd: Create irq map for irq nodes
> 
>  arch/x86/pci/vmd.c | 94 ++++++++++++++++++++++++++++++++++++++++++------------
>  1 file changed, 73 insertions(+), 21 deletions(-)

These look OK to me, so if Keith acks them I'll merge them.


* RE: [RFCv2 0/3] vmd irq list shortening, map allocation
From: Busch, Keith @ 2016-09-13 22:16 UTC
  To: Bjorn Helgaas, Derrick, Jonathan; +Cc: linux-pci

Sorry for the delay. I've been looking into these new PCI IRQ APIs and
found an unrelated issue that I stuck myself with working through with
Christoph and Thomas. :)

Patches 1/3 and 2/3 look good to me.

I'm not convinced 3/3 is an improvement. At the very least, we don't
want to BUG_ON from a failed kmalloc when we can return an appropriate
error instead. I'll take a closer look at 3/3 and get back to Jon with
more feedback.
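
Something like the error path the driver had before this patch would
do:

	vmdirq = kzalloc(sizeof(*vmdirq), GFP_KERNEL);
	if (!vmdirq)
		return -ENOMEM;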




* Re: [RFCv2 0/3] vmd irq list shortening, map allocation
From: Jon Derrick @ 2016-09-14 14:44 UTC
  To: Busch, Keith; +Cc: Bjorn Helgaas, linux-pci

Hi Keith,

Thanks for the review

On Tue, Sep 13, 2016 at 04:16:19PM -0600, Busch, Keith wrote:
> Sorry for the delay. I've been looking into these new PCI IRQ APIs and found an unrelated issue that I stuck myself with working through with Christoph and Thomas. :)
> 
> Patches 1/3 and 2/3 look good to me.
> 
> I'm not convinced 3/3 is an improvement. At the very least, we don't want to BUG_ON from a failed kmalloc when we can return an appropriate error instead. I'll take a closer look at 3/3 and get back to Jon with more feedback.
> 
Yes, the BUG_ON was a mistake I would have removed in the next rev. But
I agree with you: I am not convinced either. I have been trying to put
together a test vehicle to prove that this is actually an improvement,
but I have not had much luck yet.

Also, from what I can tell, nobody is using ida for indexing into a
map, so that would be converted to the bitmap API instead.
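
As a rough sketch (the per-list bitmap field here is hypothetical, not
something in the current patch), the allocation side might become:

	/* one DECLARE_BITMAP(map, VMD_IRQS_PER_MAP) per irq list */
	raw_spin_lock_irqsave(&list_lock, flags);
	instance = find_first_zero_bit(vmd->maps[index], VMD_IRQS_PER_MAP);
	if (instance < VMD_IRQS_PER_MAP)
		set_bit(instance, vmd->maps[index]);
	else
		instance = -1;	/* map full; fall back to kzalloc */
	raw_spin_unlock_irqrestore(&list_lock, flags);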

I'll follow up in a (long) while, once I have the test case figured out
and some real results.

If it's not clear yet, 3/3 can be killed :)


* Re: [RFCv2 0/3] vmd irq list shortening, map allocation
From: Bjorn Helgaas @ 2016-09-14 20:25 UTC
  To: Jon Derrick; +Cc: keith.busch, linux-pci

On Fri, Sep 02, 2016 at 11:53:03AM -0600, Jon Derrick wrote:
> V2:
> Added a map for vmd irqs so that all vmd irqs within an irq list are
> allocated from a single page. Once many devices share the irq in an
> irq list, this may reduce list traversal latency.
> 
> V1:
> A couple of RFC patches here. I don't notice a measurable benefit, but
> they do reduce the size of struct vmd_irq_list, and hopefully we gain
> some cache benefit from that.
> 
> These are based on:
> https://patchwork.kernel.org/patch/9304179/
> https://patchwork.kernel.org/patch/9304181/
> 
> Jon Derrick (3):
>   vmd: eliminate vmd_vector member from list type
>   vmd: eliminate index member from irq list
>   pci/vmd: Create irq map for irq nodes

I converted Keith's "looks good to me" to acks on 1/3 and 2/3, applied them
to pci/host-vmd for v4.9, and dropped 3/3 for now.  Thanks, guys!


Thread overview: 8 messages
2016-09-02 17:53 [RFCv2 0/3] vmd irq list shortening, map allocation Jon Derrick
2016-09-02 17:53 ` [RFCv2 1/3] vmd: eliminate vmd_vector member from list type Jon Derrick
2016-09-02 17:53 ` [RFCv2 2/3] vmd: eliminate index member from irq list Jon Derrick
2016-09-02 17:53 ` [RFCv2 3/3] pci/vmd: Create irq map for irq nodes Jon Derrick
2016-09-13 20:57 ` [RFCv2 0/3] vmd irq list shortening, map allocation Bjorn Helgaas
2016-09-13 22:16   ` Busch, Keith
2016-09-14 14:44     ` Jon Derrick
2016-09-14 20:25 ` Bjorn Helgaas
