xen-devel.lists.xenproject.org archive mirror
* [PATCH 0/4] x86/vMSI-X: misc improvements
@ 2016-06-08 12:48 Jan Beulich
  2016-06-08 12:52 ` [PATCH 1/4] x86/vMSI-X: defer intercept handler registration Jan Beulich
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Jan Beulich @ 2016-06-08 12:48 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper

1: defer intercept handler registration
2: drop list lock
3: drop pci_msix_get_table_len()
4: use generic intercept handler in place of MMIO one

Signed-off-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* [PATCH 1/4] x86/vMSI-X: defer intercept handler registration
  2016-06-08 12:48 [PATCH 0/4] x86/vMSI-X: misc improvements Jan Beulich
@ 2016-06-08 12:52 ` Jan Beulich
  2016-06-17 16:13   ` Konrad Rzeszutek Wilk
  2016-06-21 17:11   ` Andrew Cooper
  2016-06-08 12:53 ` [PATCH 2/4] x86/vMSI-X: drop list lock Jan Beulich
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 13+ messages in thread
From: Jan Beulich @ 2016-06-08 12:52 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper


There's no point in registering the internal MSI-X table intercept
functions on all domains - it is sufficient to do so once a domain gets
an MSI-X capable device assigned.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -644,8 +644,6 @@ int hvm_domain_initialise(struct domain
 
     rtc_init(d);
 
-    msixtbl_init(d);
-
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
 
     if ( hvm_tsc_scaling_supported )
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -553,7 +553,8 @@ found:
 
 void msixtbl_init(struct domain *d)
 {
-    if ( !has_vlapic(d) )
+    if ( !has_hvm_container_domain(d) || !has_vlapic(d) ||
+         d->arch.hvm_domain.msixtbl_list.next )
         return;
 
     INIT_LIST_HEAD(&d->arch.hvm_domain.msixtbl_list);
@@ -567,7 +568,7 @@ void msixtbl_pt_cleanup(struct domain *d
     struct msixtbl_entry *entry, *temp;
     unsigned long flags;
 
-    if ( !has_vlapic(d) )
+    if ( !d->arch.hvm_domain.msixtbl_list.next )
         return;
 
     /* msixtbl_list_lock must be acquired with irq_disabled for check_lock() */
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1380,6 +1380,9 @@ static int assign_device(struct domain *
         goto done;
     }
 
+    if ( pdev->msix )
+        msixtbl_init(d);
+
     pdev->fault.count = 0;
 
     if ( (rc = hd->platform_ops->assign_device(d, devfn, pci_to_dev(pdev), flag)) )





* [PATCH 2/4] x86/vMSI-X: drop list lock
  2016-06-08 12:48 [PATCH 0/4] x86/vMSI-X: misc improvements Jan Beulich
  2016-06-08 12:52 ` [PATCH 1/4] x86/vMSI-X: defer intercept handler registration Jan Beulich
@ 2016-06-08 12:53 ` Jan Beulich
  2016-06-21 17:26   ` Andrew Cooper
  2016-06-08 12:54 ` [PATCH 3/4] x86/vMSI-X: drop pci_msix_get_table_len() Jan Beulich
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2016-06-08 12:53 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper


msixtbl_pt_{,un}register() already run with both the PCI devices lock
and the domain event lock held, so there's no need for another lock.
Just to be on the safe side, acquire the domain event lock in the
cleanup function (although I don't think this is strictly necessary).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -468,8 +468,6 @@ int msixtbl_pt_register(struct domain *d
 
     pdev = msi_desc->dev;
 
-    spin_lock(&d->arch.hvm_domain.msixtbl_list_lock);
-
     list_for_each_entry( entry, &d->arch.hvm_domain.msixtbl_list, list )
         if ( pdev == entry->pdev )
             goto found;
@@ -480,7 +478,6 @@ int msixtbl_pt_register(struct domain *d
 
 found:
     atomic_inc(&entry->refcnt);
-    spin_unlock(&d->arch.hvm_domain.msixtbl_list_lock);
     r = 0;
 
 out:
@@ -530,15 +527,10 @@ void msixtbl_pt_unregister(struct domain
 
     pdev = msi_desc->dev;
 
-    spin_lock(&d->arch.hvm_domain.msixtbl_list_lock);
-
     list_for_each_entry( entry, &d->arch.hvm_domain.msixtbl_list, list )
         if ( pdev == entry->pdev )
             goto found;
 
-    spin_unlock(&d->arch.hvm_domain.msixtbl_list_lock);
-
-
 out:
     spin_unlock_irq(&irq_desc->lock);
     return;
@@ -547,7 +539,6 @@ found:
     if ( !atomic_dec_and_test(&entry->refcnt) )
         del_msixtbl_entry(entry);
 
-    spin_unlock(&d->arch.hvm_domain.msixtbl_list_lock);
     spin_unlock_irq(&irq_desc->lock);
 }
 
@@ -558,7 +549,6 @@ void msixtbl_init(struct domain *d)
         return;
 
     INIT_LIST_HEAD(&d->arch.hvm_domain.msixtbl_list);
-    spin_lock_init(&d->arch.hvm_domain.msixtbl_list_lock);
 
     register_mmio_handler(d, &msixtbl_mmio_ops);
 }
@@ -566,21 +556,17 @@ void msixtbl_init(struct domain *d)
 void msixtbl_pt_cleanup(struct domain *d)
 {
     struct msixtbl_entry *entry, *temp;
-    unsigned long flags;
 
     if ( !d->arch.hvm_domain.msixtbl_list.next )
         return;
 
-    /* msixtbl_list_lock must be acquired with irq_disabled for check_lock() */
-    local_irq_save(flags); 
-    spin_lock(&d->arch.hvm_domain.msixtbl_list_lock);
+    spin_lock(&d->event_lock);
 
     list_for_each_entry_safe( entry, temp,
                               &d->arch.hvm_domain.msixtbl_list, list )
         del_msixtbl_entry(entry);
 
-    spin_unlock(&d->arch.hvm_domain.msixtbl_list_lock);
-    local_irq_restore(flags);
+    spin_unlock(&d->event_lock);
 }
 
 void msix_write_completion(struct vcpu *v)
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -124,7 +124,6 @@ struct hvm_domain {
 
     /* hypervisor intercepted msix table */
     struct list_head       msixtbl_list;
-    spinlock_t             msixtbl_list_lock;
 
     struct viridian_domain viridian;
 





* [PATCH 3/4] x86/vMSI-X: drop pci_msix_get_table_len()
  2016-06-08 12:48 [PATCH 0/4] x86/vMSI-X: misc improvements Jan Beulich
  2016-06-08 12:52 ` [PATCH 1/4] x86/vMSI-X: defer intercept handler registration Jan Beulich
  2016-06-08 12:53 ` [PATCH 2/4] x86/vMSI-X: drop list lock Jan Beulich
@ 2016-06-08 12:54 ` Jan Beulich
  2016-06-21 17:27   ` Andrew Cooper
  2016-06-08 12:54 ` [PATCH 4/4] x86/vMSI-X: use generic intercept handler in place of MMIO one Jan Beulich
  2016-06-17  8:20 ` Ping: [PATCH 0/4] x86/vMSI-X: misc improvements Jan Beulich
  4 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2016-06-08 12:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper


We can calculate the needed value at the single use site more easily.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -411,7 +411,7 @@ static void add_msixtbl_entry(struct dom
     INIT_RCU_HEAD(&entry->rcu);
     atomic_set(&entry->refcnt, 0);
 
-    entry->table_len = pci_msix_get_table_len(pdev);
+    entry->table_len = pdev->msix->nr_entries * PCI_MSIX_ENTRY_SIZE;
     entry->pdev = pdev;
     entry->gtable = (unsigned long) gtable;
 
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -1463,27 +1463,6 @@ int pci_restore_msi_state(struct pci_dev
     return 0;
 }
 
-unsigned int pci_msix_get_table_len(struct pci_dev *pdev)
-{
-    int pos;
-    u16 control, seg = pdev->seg;
-    u8 bus, slot, func;
-    unsigned int len;
-
-    bus = pdev->bus;
-    slot = PCI_SLOT(pdev->devfn);
-    func = PCI_FUNC(pdev->devfn);
-
-    pos = pci_find_cap_offset(seg, bus, slot, func, PCI_CAP_ID_MSIX);
-    if ( !pos || !use_msi )
-        return 0;
-
-    control = pci_conf_read16(seg, bus, slot, func, msix_control_reg(pos));
-    len = msix_table_size(control) * PCI_MSIX_ENTRY_SIZE;
-
-    return len;
-}
-
 static int msi_cpu_callback(
     struct notifier_block *nfb, unsigned long action, void *hcpu)
 {
--- a/xen/include/asm-x86/msi.h
+++ b/xen/include/asm-x86/msi.h
@@ -91,8 +91,6 @@ extern void teardown_msi_irq(int irq);
 extern int msi_free_vector(struct msi_desc *entry);
 extern int pci_restore_msi_state(struct pci_dev *pdev);
 
-extern unsigned int pci_msix_get_table_len(struct pci_dev *pdev);
-
 struct msi_desc {
 	struct msi_attrib {
 		__u8	type;		/* {0: unused, 5h:MSI, 11h:MSI-X} */





* [PATCH 4/4] x86/vMSI-X: use generic intercept handler in place of MMIO one
  2016-06-08 12:48 [PATCH 0/4] x86/vMSI-X: misc improvements Jan Beulich
                   ` (2 preceding siblings ...)
  2016-06-08 12:54 ` [PATCH 3/4] x86/vMSI-X: drop pci_msix_get_table_len() Jan Beulich
@ 2016-06-08 12:54 ` Jan Beulich
  2016-06-13  8:36   ` Paul Durrant
  2016-06-21 17:33   ` Andrew Cooper
  2016-06-17  8:20 ` Ping: [PATCH 0/4] x86/vMSI-X: misc improvements Jan Beulich
  4 siblings, 2 replies; 13+ messages in thread
From: Jan Beulich @ 2016-06-08 12:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant


This allows us to see the full ioreq without having to peek into state
which is supposedly private to the emulation framework.

Suggested-by: Paul Durrant <Paul.Durrant@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -199,9 +199,8 @@ static struct msi_desc *msixtbl_addr_to_
     return NULL;
 }
 
-static int msixtbl_read(
-    struct vcpu *v, unsigned long address,
-    unsigned int len, unsigned long *pval)
+static int msixtbl_read(const struct hvm_io_handler *handler,
+                        uint64_t address, uint32_t len, uint64_t *pval)
 {
     unsigned long offset;
     struct msixtbl_entry *entry;
@@ -213,7 +212,7 @@ static int msixtbl_read(
 
     rcu_read_lock(&msixtbl_rcu_lock);
 
-    entry = msixtbl_find_entry(v, address);
+    entry = msixtbl_find_entry(current, address);
     if ( !entry )
         goto out;
     offset = address & (PCI_MSIX_ENTRY_SIZE - 1);
@@ -333,23 +332,29 @@ out:
     return r;
 }
 
-static int msixtbl_range(struct vcpu *v, unsigned long addr)
+static int _msixtbl_write(const struct hvm_io_handler *handler,
+                         uint64_t address, uint32_t len, uint64_t val)
 {
+    return msixtbl_write(current, address, len, val);
+}
+
+static bool_t msixtbl_range(const struct hvm_io_handler *handler,
+                            const ioreq_t *r)
+{
+    struct vcpu *curr = current;
+    unsigned long addr = r->addr;
     const struct msi_desc *desc;
-    const ioreq_t *r;
+
+    ASSERT(r->type == IOREQ_TYPE_COPY);
 
     rcu_read_lock(&msixtbl_rcu_lock);
-    desc = msixtbl_addr_to_desc(msixtbl_find_entry(v, addr), addr);
+    desc = msixtbl_addr_to_desc(msixtbl_find_entry(curr, addr), addr);
     rcu_read_unlock(&msixtbl_rcu_lock);
 
     if ( desc )
         return 1;
 
-    r = &v->arch.hvm_vcpu.hvm_io.io_req;
-    if ( r->state != STATE_IOREQ_READY || r->addr != addr )
-        return 0;
-    ASSERT(r->type == IOREQ_TYPE_COPY);
-    if ( r->dir == IOREQ_WRITE )
+    if ( r->state == STATE_IOREQ_READY && r->dir == IOREQ_WRITE )
     {
         unsigned int size = r->size;
 
@@ -368,8 +373,8 @@ static int msixtbl_range(struct vcpu *v,
                   PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET) &&
                  !(data & PCI_MSIX_VECTOR_BITMASK) )
             {
-                v->arch.hvm_vcpu.hvm_io.msix_snoop_address = addr;
-                v->arch.hvm_vcpu.hvm_io.msix_snoop_gpa = 0;
+                curr->arch.hvm_vcpu.hvm_io.msix_snoop_address = addr;
+                curr->arch.hvm_vcpu.hvm_io.msix_snoop_gpa = 0;
             }
         }
         else if ( (size == 4 || size == 8) &&
@@ -386,9 +391,9 @@ static int msixtbl_range(struct vcpu *v,
             BUILD_BUG_ON((PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET + 4) &
                          (PCI_MSIX_ENTRY_SIZE - 1));
 
-            v->arch.hvm_vcpu.hvm_io.msix_snoop_address =
+            curr->arch.hvm_vcpu.hvm_io.msix_snoop_address =
                 addr + size * r->count - 4;
-            v->arch.hvm_vcpu.hvm_io.msix_snoop_gpa =
+            curr->arch.hvm_vcpu.hvm_io.msix_snoop_gpa =
                 r->data + size * r->count - 4;
         }
     }
@@ -396,10 +401,10 @@ static int msixtbl_range(struct vcpu *v,
     return 0;
 }
 
-static const struct hvm_mmio_ops msixtbl_mmio_ops = {
-    .check = msixtbl_range,
+static const struct hvm_io_ops msixtbl_mmio_ops = {
+    .accept = msixtbl_range,
     .read = msixtbl_read,
-    .write = msixtbl_write
+    .write = _msixtbl_write
 };
 
 static void add_msixtbl_entry(struct domain *d,
@@ -544,13 +549,20 @@ found:
 
 void msixtbl_init(struct domain *d)
 {
+    struct hvm_io_handler *handler;
+
     if ( !has_hvm_container_domain(d) || !has_vlapic(d) ||
          d->arch.hvm_domain.msixtbl_list.next )
         return;
 
     INIT_LIST_HEAD(&d->arch.hvm_domain.msixtbl_list);
 
-    register_mmio_handler(d, &msixtbl_mmio_ops);
+    handler = hvm_next_io_handler(d);
+    if ( handler )
+    {
+        handler->type = IOREQ_TYPE_COPY;
+        handler->ops = &msixtbl_mmio_ops;
+    }
 }
 
 void msixtbl_pt_cleanup(struct domain *d)




* Re: [PATCH 4/4] x86/vMSI-X: use generic intercept handler in place of MMIO one
  2016-06-08 12:54 ` [PATCH 4/4] x86/vMSI-X: use generic intercept handler in place of MMIO one Jan Beulich
@ 2016-06-13  8:36   ` Paul Durrant
  2016-06-21 17:33   ` Andrew Cooper
  1 sibling, 0 replies; 13+ messages in thread
From: Paul Durrant @ 2016-06-13  8:36 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Andrew Cooper

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 08 June 2016 13:55
> To: xen-devel
> Cc: Andrew Cooper; Paul Durrant
> Subject: [PATCH 4/4] x86/vMSI-X: use generic intercept handler in place of
> MMIO one
> 
> This allows us to see the full ioreq without having to peek into state
> which is supposedly private to the emulation framework.
> 
> Suggested-by: Paul Durrant <Paul.Durrant@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> 
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -199,9 +199,8 @@ static struct msi_desc *msixtbl_addr_to_
>      return NULL;
>  }
> 
> -static int msixtbl_read(
> -    struct vcpu *v, unsigned long address,
> -    unsigned int len, unsigned long *pval)
> +static int msixtbl_read(const struct hvm_io_handler *handler,
> +                        uint64_t address, uint32_t len, uint64_t *pval)
>  {
>      unsigned long offset;
>      struct msixtbl_entry *entry;
> @@ -213,7 +212,7 @@ static int msixtbl_read(
> 
>      rcu_read_lock(&msixtbl_rcu_lock);
> 
> -    entry = msixtbl_find_entry(v, address);
> +    entry = msixtbl_find_entry(current, address);
>      if ( !entry )
>          goto out;
>      offset = address & (PCI_MSIX_ENTRY_SIZE - 1);
> @@ -333,23 +332,29 @@ out:
>      return r;
>  }
> 
> -static int msixtbl_range(struct vcpu *v, unsigned long addr)
> +static int _msixtbl_write(const struct hvm_io_handler *handler,
> +                         uint64_t address, uint32_t len, uint64_t val)
>  {
> +    return msixtbl_write(current, address, len, val);
> +}
> +
> +static bool_t msixtbl_range(const struct hvm_io_handler *handler,
> +                            const ioreq_t *r)
> +{
> +    struct vcpu *curr = current;
> +    unsigned long addr = r->addr;
>      const struct msi_desc *desc;
> -    const ioreq_t *r;
> +
> +    ASSERT(r->type == IOREQ_TYPE_COPY);
> 
>      rcu_read_lock(&msixtbl_rcu_lock);
> -    desc = msixtbl_addr_to_desc(msixtbl_find_entry(v, addr), addr);
> +    desc = msixtbl_addr_to_desc(msixtbl_find_entry(curr, addr), addr);
>      rcu_read_unlock(&msixtbl_rcu_lock);
> 
>      if ( desc )
>          return 1;
> 
> -    r = &v->arch.hvm_vcpu.hvm_io.io_req;
> -    if ( r->state != STATE_IOREQ_READY || r->addr != addr )
> -        return 0;
> -    ASSERT(r->type == IOREQ_TYPE_COPY);
> -    if ( r->dir == IOREQ_WRITE )
> +    if ( r->state == STATE_IOREQ_READY && r->dir == IOREQ_WRITE )
>      {
>          unsigned int size = r->size;
> 
> @@ -368,8 +373,8 @@ static int msixtbl_range(struct vcpu *v,
>                    PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET) &&
>                   !(data & PCI_MSIX_VECTOR_BITMASK) )
>              {
> -                v->arch.hvm_vcpu.hvm_io.msix_snoop_address = addr;
> -                v->arch.hvm_vcpu.hvm_io.msix_snoop_gpa = 0;
> +                curr->arch.hvm_vcpu.hvm_io.msix_snoop_address = addr;
> +                curr->arch.hvm_vcpu.hvm_io.msix_snoop_gpa = 0;
>              }
>          }
>          else if ( (size == 4 || size == 8) &&
> @@ -386,9 +391,9 @@ static int msixtbl_range(struct vcpu *v,
>              BUILD_BUG_ON((PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET + 4) &
>                           (PCI_MSIX_ENTRY_SIZE - 1));
> 
> -            v->arch.hvm_vcpu.hvm_io.msix_snoop_address =
> +            curr->arch.hvm_vcpu.hvm_io.msix_snoop_address =
>                  addr + size * r->count - 4;
> -            v->arch.hvm_vcpu.hvm_io.msix_snoop_gpa =
> +            curr->arch.hvm_vcpu.hvm_io.msix_snoop_gpa =
>                  r->data + size * r->count - 4;
>          }
>      }
> @@ -396,10 +401,10 @@ static int msixtbl_range(struct vcpu *v,
>      return 0;
>  }
> 
> -static const struct hvm_mmio_ops msixtbl_mmio_ops = {
> -    .check = msixtbl_range,
> +static const struct hvm_io_ops msixtbl_mmio_ops = {
> +    .accept = msixtbl_range,
>      .read = msixtbl_read,
> -    .write = msixtbl_write
> +    .write = _msixtbl_write
>  };
> 
>  static void add_msixtbl_entry(struct domain *d,
> @@ -544,13 +549,20 @@ found:
> 
>  void msixtbl_init(struct domain *d)
>  {
> +    struct hvm_io_handler *handler;
> +
>      if ( !has_hvm_container_domain(d) || !has_vlapic(d) ||
>           d->arch.hvm_domain.msixtbl_list.next )
>          return;
> 
>      INIT_LIST_HEAD(&d->arch.hvm_domain.msixtbl_list);
> 
> -    register_mmio_handler(d, &msixtbl_mmio_ops);
> +    handler = hvm_next_io_handler(d);
> +    if ( handler )
> +    {
> +        handler->type = IOREQ_TYPE_COPY;
> +        handler->ops = &msixtbl_mmio_ops;
> +    }
>  }
> 
>  void msixtbl_pt_cleanup(struct domain *d)
> 



* Ping: [PATCH 0/4] x86/vMSI-X: misc improvements
  2016-06-08 12:48 [PATCH 0/4] x86/vMSI-X: misc improvements Jan Beulich
                   ` (3 preceding siblings ...)
  2016-06-08 12:54 ` [PATCH 4/4] x86/vMSI-X: use generic intercept handler in place of MMIO one Jan Beulich
@ 2016-06-17  8:20 ` Jan Beulich
  4 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2016-06-17  8:20 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel

>>> On 08.06.16 at 14:48, <JBeulich@suse.com> wrote:
> 1: defer intercept handler registration
> 2: drop list lock
> 3: drop pci_msix_get_table_len()
> 4: use generic intercept handler in place of MMIO one
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 1/4] x86/vMSI-X: defer intercept handler registration
  2016-06-08 12:52 ` [PATCH 1/4] x86/vMSI-X: defer intercept handler registration Jan Beulich
@ 2016-06-17 16:13   ` Konrad Rzeszutek Wilk
  2016-06-17 16:38     ` Jan Beulich
  2016-06-21 17:11   ` Andrew Cooper
  1 sibling, 1 reply; 13+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-06-17 16:13 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper

On Wed, Jun 08, 2016 at 06:52:58AM -0600, Jan Beulich wrote:
> There's no point in registering the internal MSI-X table intercept
> functions on all domains - it is sufficient to do so once a domain gets
> an MSI-X capable device assigned.

I think this will break on SR-IOV devices that are created when we
have not set up MCFG.

There is this Intel board (which I have) where the MCFG is set up
only via ACPI, and in the past we had issues where we never
detected that the VF had MSI-X, because we could not access the
configuration registers past 0xff.

I am pretty sure that Linux now uploads the MCFG data during
bootup from the ACPI AML code, but earlier kernels may not.

And that would mean the device would try to use MSI-X while
the code would see !pdev->msix.

Hmm, though I wonder - with all those improvements capturing
and validating MSI-X accesses in the hypervisor - would this
even matter?

Let me stash an 82576 card in that box and see what happens with
Xen 4.7.

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -644,8 +644,6 @@ int hvm_domain_initialise(struct domain
>  
>      rtc_init(d);
>  
> -    msixtbl_init(d);
> -
>      register_portio_handler(d, 0xe9, 1, hvm_print_line);
>  
>      if ( hvm_tsc_scaling_supported )
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -553,7 +553,8 @@ found:
>  
>  void msixtbl_init(struct domain *d)
>  {
> -    if ( !has_vlapic(d) )
> +    if ( !has_hvm_container_domain(d) || !has_vlapic(d) ||
> +         d->arch.hvm_domain.msixtbl_list.next )
>          return;
>  
>      INIT_LIST_HEAD(&d->arch.hvm_domain.msixtbl_list);
> @@ -567,7 +568,7 @@ void msixtbl_pt_cleanup(struct domain *d
>      struct msixtbl_entry *entry, *temp;
>      unsigned long flags;
>  
> -    if ( !has_vlapic(d) )
> +    if ( !d->arch.hvm_domain.msixtbl_list.next )
>          return;
>  
>      /* msixtbl_list_lock must be acquired with irq_disabled for check_lock() */
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -1380,6 +1380,9 @@ static int assign_device(struct domain *
>          goto done;
>      }
>  
> +    if ( pdev->msix )
> +        msixtbl_init(d);
> +
>      pdev->fault.count = 0;
>  
>      if ( (rc = hd->platform_ops->assign_device(d, devfn, pci_to_dev(pdev), flag)) )
> 
> 
> 



* Re: [PATCH 1/4] x86/vMSI-X: defer intercept handler registration
  2016-06-17 16:13   ` Konrad Rzeszutek Wilk
@ 2016-06-17 16:38     ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2016-06-17 16:38 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: Andrew Cooper, xen-devel

>>> On 17.06.16 at 18:13, <konrad.wilk@oracle.com> wrote:
> On Wed, Jun 08, 2016 at 06:52:58AM -0600, Jan Beulich wrote:
>> There's no point in registering the internal MSI-X table intercept
>> functions on all domains - it is sufficient to do so once a domain gets
>> an MSI-X capable device assigned.
> 
> I think this will break on SR-IOV devices that are created and we
> had not setup MCFG.

How does MCFG get into the picture here? (And anyway, this series
is about vMSI-X, not host MSI-X.)

> There is this Intel board (which I have) where the MCFG is setup
> only via ACPI and in the past we had issues - where we never
> detect that the VF had MSI-X - b/c we could not access the
> configuration registers past 0xff.

The MSI-X capability necessarily lives in the low 256 bytes. Do you
mean the SR-IOV capability?

> I am pretty sure that Linux now uploads the MCFG data during
> bootup from the ACPI AML code, but earlier kernels may not.
> 
> And that would mean the device would try to use MSI-X while
> code would be !pdev->msix.
> 
> Hmm, thought I wonder - with all those improvements of capturing
> the MSI-X and validating it in the hypervisor - would this
> even matter?
> 
> Let me stash an 82576 card in that box and see what happens with
> Xen 4.7.

Thanks.

Jan



* Re: [PATCH 1/4] x86/vMSI-X: defer intercept handler registration
  2016-06-08 12:52 ` [PATCH 1/4] x86/vMSI-X: defer intercept handler registration Jan Beulich
  2016-06-17 16:13   ` Konrad Rzeszutek Wilk
@ 2016-06-21 17:11   ` Andrew Cooper
  1 sibling, 0 replies; 13+ messages in thread
From: Andrew Cooper @ 2016-06-21 17:11 UTC (permalink / raw)
  To: Jan Beulich, xen-devel

On 08/06/16 13:52, Jan Beulich wrote:
> There's no point in registering the internal MSI-X table intercept
> functions on all domains - it is sufficient to do so once a domain gets
> an MSI-X capable device assigned.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH 2/4] x86/vMSI-X: drop list lock
  2016-06-08 12:53 ` [PATCH 2/4] x86/vMSI-X: drop list lock Jan Beulich
@ 2016-06-21 17:26   ` Andrew Cooper
  0 siblings, 0 replies; 13+ messages in thread
From: Andrew Cooper @ 2016-06-21 17:26 UTC (permalink / raw)
  To: Jan Beulich, xen-devel



On 08/06/16 13:53, Jan Beulich wrote:
> msixtbl_pt_{,un}register() already run with both the PCI devices lock
> and the domain event lock held, so there's no need for another lock.
> Just to be on the safe side, acquire the domain event lock in the
> cleanup function (albeit I don't think this is strictly necessary).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH 3/4] x86/vMSI-X: drop pci_msix_get_table_len()
  2016-06-08 12:54 ` [PATCH 3/4] x86/vMSI-X: drop pci_msix_get_table_len() Jan Beulich
@ 2016-06-21 17:27   ` Andrew Cooper
  0 siblings, 0 replies; 13+ messages in thread
From: Andrew Cooper @ 2016-06-21 17:27 UTC (permalink / raw)
  To: Jan Beulich, xen-devel



On 08/06/16 13:54, Jan Beulich wrote:
> We can calculate the needed value at the single use site more easily.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH 4/4] x86/vMSI-X: use generic intercept handler in place of MMIO one
  2016-06-08 12:54 ` [PATCH 4/4] x86/vMSI-X: use generic intercept handler in place of MMIO one Jan Beulich
  2016-06-13  8:36   ` Paul Durrant
@ 2016-06-21 17:33   ` Andrew Cooper
  1 sibling, 0 replies; 13+ messages in thread
From: Andrew Cooper @ 2016-06-21 17:33 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Paul Durrant

On 08/06/16 13:54, Jan Beulich wrote:
> This allows us to see the full ioreq without having to peek into state
> which is supposedly private to the emulation framework.
>
> Suggested-by: Paul Durrant <Paul.Durrant@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>, with two minor
style corrections.

> @@ -333,23 +332,29 @@ out:
>      return r;
>  }
>  
> -static int msixtbl_range(struct vcpu *v, unsigned long addr)
> +static int _msixtbl_write(const struct hvm_io_handler *handler,
> +                         uint64_t address, uint32_t len, uint64_t val)

Indentation.

> @@ -396,10 +401,10 @@ static int msixtbl_range(struct vcpu *v,
>      return 0;
>  }
>  
> -static const struct hvm_mmio_ops msixtbl_mmio_ops = {
> -    .check = msixtbl_range,
> +static const struct hvm_io_ops msixtbl_mmio_ops = {
> +    .accept = msixtbl_range,
>      .read = msixtbl_read,
> -    .write = msixtbl_write
> +    .write = _msixtbl_write

Trailing comma.

~Andrew


end of thread, other threads:[~2016-06-21 17:33 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-06-08 12:48 [PATCH 0/4] x86/vMSI-X: misc improvements Jan Beulich
2016-06-08 12:52 ` [PATCH 1/4] x86/vMSI-X: defer intercept handler registration Jan Beulich
2016-06-17 16:13   ` Konrad Rzeszutek Wilk
2016-06-17 16:38     ` Jan Beulich
2016-06-21 17:11   ` Andrew Cooper
2016-06-08 12:53 ` [PATCH 2/4] x86/vMSI-X: drop list lock Jan Beulich
2016-06-21 17:26   ` Andrew Cooper
2016-06-08 12:54 ` [PATCH 3/4] x86/vMSI-X: drop pci_msix_get_table_len() Jan Beulich
2016-06-21 17:27   ` Andrew Cooper
2016-06-08 12:54 ` [PATCH 4/4] x86/vMSI-X: use generic intercept handler in place of MMIO one Jan Beulich
2016-06-13  8:36   ` Paul Durrant
2016-06-21 17:33   ` Andrew Cooper
2016-06-17  8:20 ` Ping: [PATCH 0/4] x86/vMSI-X: misc improvements Jan Beulich
