From: "Jan Beulich"
Subject: Re: [PATCH v4 RFC 0/6] x86/MSI: XSA-120, 126, 128-131 follow-up
Date: Mon, 13 Jul 2015 12:42:01 +0100
Message-ID: <55A3C029020000780009024F@mail.emea.novell.com>
In-Reply-To: <558839ED02000078000879FE@mail.emea.novell.com>
To: xen-devel
Cc: Andrew Cooper, Keir Fraser, Wei Liu

>>> On 22.06.15 at 16:38, wrote:
> Only patches 1, 2, and 6 are really RFC (with some debugging code still
> left in), the others (hence v4) have been submitted before.
>
> 1: PCI: add config space write abstract intercept logic
> 2: MSI-X: track host and guest mask-all requests separately
> 3: MSI-X: be more careful during teardown
> 4: MSI-X: access MSI-X table only after having enabled MSI-X
> 5: MSI-X: reduce fiddling with control register during restore
> 6: MSI: properly track guest masking requests
>
> A fundamental question is whether to go without the so far missing
> (and more involved) MMCFG intercepts. This largely depends
> whether there are any half way recent Dom0 OSes using MMCFG
> accesses for the base 256 bytes of PCI config space.

Below is a 7th patch dealing with MMCFG accesses for devices we know
about _before_ Dom0 sets up its mapping of MMCFG space. As noted at its
top, this means it still won't work for a number of cases (not mentioned
there is the case of Dom0 re-organizing bus numbering in order to e.g.
fit in SR-IOV virtual functions).

There are two approaches I can see to deal with these cases, neither of
which I particularly like:

1) Globally intercept all MMCFG writes (i.e. independent of whether
there's anything we actually need to see, like the MSI/MSI-X mask bits
that are of interest here). What makes me dislike this model is the
increased involvement of the x86 instruction emulator, plus the
increased chance of having to deal with accesses other than plain
stores.

2) Track Dom0 mappings of MMCFG space: We'd need to set up (if not
covered by the frame table) struct page_info for all involved pages,
and either overload it to store two or three pointers back to the L1
page table slots where the respective page is being mapped, or (if two
or three mappings per page don't suffice) attach a linked list of such
pointers. It goes without saying that the complexity of getting this
overload right (namely not conflicting with any existing uses), as well
as the need to not lose track of any mappings, are the reasons I'm not
really keen on this one.
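For illustration only, a rough sketch of the bookkeeping 2) would imply
(all names here are made up and deliberately don't match anything in the
tree; the real difficulty would be fitting this into the existing
struct page_info overloads and keeping it in sync on every map/unmap):

#define MMCFG_INLINE_SLOTS 2

struct mmcfg_track {                    /* hypothetical page_info overload */
    /* Back-pointers to the L1 slots currently mapping this MMCFG page. */
    l1_pgentry_t *l1_slot[MMCFG_INLINE_SLOTS];
    /* Overflow list, should a page be mapped more often than that. */
    struct mmcfg_track_extra {
        struct mmcfg_track_extra *next;
        l1_pgentry_t *l1_slot;
    } *extra;
};

/* Would need calling from every path (de)establishing such a mapping. */
static void mmcfg_track_add(struct mmcfg_track *t, l1_pgentry_t *l1_slot)
{
    unsigned int i;

    for ( i = 0; i < MMCFG_INLINE_SLOTS; ++i )
        if ( !t->l1_slot[i] )
        {
            t->l1_slot[i] = l1_slot;
            return;
        }
    /* Fall back to the overflow list (allocation and locking omitted). */
}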
Models I considered and found not suitable (as far as I can recall right
now):

3) Hunt down Dom0 mappings at the time we learn we need to intercept
writes for a particular device. The overhead of having to scan all Dom0
page tables makes such a solution seem prohibitive.

4) Initially disallow writes to MMCFG pages globally, granting write
permission upon first write access when we can determine that the
device doesn't need any "snooping". This wouldn't cope with hotplug
(and likewise SR-IOV VFs), as we might end up needing to revoke write
access again.

5) Mandate that Dom0 use hypervisor-managed mappings of MMCFG space.
For one, this would require all Dom0 kernels to be changed. And then it
wouldn't be usable for 32-bit Dom0, as there's no VA space left to
place such a mapping, and even if we recycled some of the compat M2P
space the range wouldn't be large enough to cover even a single
segment's space, let alone multiple segments. Introducing a windowing
model would make the Dom0 changes even more complex (for little
benefit, considering that we'd generally expect most people to use
64-bit kernels these days).

6) Mandate that at least config space writes be done via hypercall.
This again would require all Dom0 kernels to change.

Hence: if anyone has a better idea, please speak up soon. The same of
course goes for anyone having a particular preference between the two
viable options presented above.

Jan

TODO: currently dependent upon IOMMU being enabled (due to bus scan only happening in that case)
TODO: SR-IOV (and the like?) devices not visible at time of MMCFG registration
TODO: remove //temp-s

--- unstable.orig/xen/arch/x86/mm.c
+++ unstable/xen/arch/x86/mm.c
@@ -5208,6 +5208,7 @@ int ptwr_do_page_fault(struct vcpu *v, u
 
     /* We are looking only for read-only mappings of p.t. pages. */
     if ( ((l1e_get_flags(pte) & (_PAGE_PRESENT|_PAGE_RW)) != _PAGE_PRESENT) ||
+         rangeset_contains_singleton(mmio_ro_ranges, l1e_get_pfn(pte)) ||
          !get_page_from_pagenr(l1e_get_pfn(pte), d) )
         goto bail;
 
@@ -5255,6 +5256,7 @@ int ptwr_do_page_fault(struct vcpu *v, u
 struct mmio_ro_emulate_ctxt {
     struct x86_emulate_ctxt ctxt;
     unsigned long cr2;
+    unsigned int seg, bdf;
 };
 
 static int mmio_ro_emulated_read(
@@ -5294,6 +5296,50 @@ static const struct x86_emulate_ops mmio
     .write      = mmio_ro_emulated_write,
 };
 
+static int mmio_intercept_write(
+    enum x86_segment seg,
+    unsigned long offset,
+    void *p_data,
+    unsigned int bytes,
+    struct x86_emulate_ctxt *ctxt)
+{
+    struct mmio_ro_emulate_ctxt *mmio_ctxt =
+        container_of(ctxt, struct mmio_ro_emulate_ctxt, ctxt);
+
+    /*
+     * Only allow naturally-aligned stores no wider than 4 bytes to the
+     * original %cr2 address.
+     */
+    if ( ((bytes | offset) & (bytes - 1)) || bytes > 4 ||
+         offset != mmio_ctxt->cr2 )
+    {
+        MEM_LOG("mmio_intercept: bad write (cr2=%lx, addr=%lx, bytes=%u)",
+                mmio_ctxt->cr2, offset, bytes);
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    offset &= 0xfff;
+printk("%04x:%02x:%02x.%u write [%lx] %0*x\n",//temp
+       mmio_ctxt->seg, PCI_BUS(mmio_ctxt->bdf), PCI_SLOT(mmio_ctxt->bdf), PCI_FUNC(mmio_ctxt->bdf),//temp
+       offset, bytes * 2, *(uint32_t*)p_data);//temp
+    pci_conf_write_intercept(mmio_ctxt->seg, mmio_ctxt->bdf, offset, bytes,
+                             p_data);
+printk("%04x:%02x:%02x.%u write [%lx]=%0*x\n",//temp
+       mmio_ctxt->seg, PCI_BUS(mmio_ctxt->bdf), PCI_SLOT(mmio_ctxt->bdf), PCI_FUNC(mmio_ctxt->bdf),//temp
+       offset, bytes * 2, *(uint32_t*)p_data);//temp
+    pci_mmcfg_write(mmio_ctxt->seg, PCI_BUS(mmio_ctxt->bdf),
+                    PCI_DEVFN2(mmio_ctxt->bdf), offset, bytes,
+                    *(uint32_t *)p_data);
+
+    return X86EMUL_OKAY;
+}
+
+static const struct x86_emulate_ops mmio_intercept_ops = {
+    .read       = mmio_ro_emulated_read,
+    .insn_fetch = ptwr_emulated_read,
+    .write      = mmio_intercept_write,
+};
+
 /* Check if guest is trying to modify a r/o MMIO page. */
 int mmio_ro_do_page_fault(struct vcpu *v, unsigned long addr,
                           struct cpu_user_regs *regs)
@@ -5308,6 +5354,7 @@ int mmio_ro_do_page_fault(struct vcpu *v
         .ctxt.swint_emulate = x86_swint_emulate_none,
         .cr2 = addr
     };
+    const unsigned long *map;
     int rc;
 
     /* Attempt to read the PTE that maps the VA being accessed. */
@@ -5332,7 +5379,14 @@ int mmio_ro_do_page_fault(struct vcpu *v
     if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn) )
         return 0;
 
-    rc = x86_emulate(&mmio_ro_ctxt.ctxt, &mmio_ro_emulate_ops);
+    if ( pci_mmcfg_decode(mfn, &mmio_ro_ctxt.seg, &mmio_ro_ctxt.bdf) &&
+         (map = pci_get_intercept_map(mmio_ro_ctxt.seg)) != NULL &&
+         test_bit(mmio_ro_ctxt.bdf, map) &&
+         ((map = pci_get_ro_map(mmio_ro_ctxt.seg)) == NULL ||
+          !test_bit(mmio_ro_ctxt.bdf, map)) )
+        rc = x86_emulate(&mmio_ro_ctxt.ctxt, &mmio_intercept_ops);
+    else
+        rc = x86_emulate(&mmio_ro_ctxt.ctxt, &mmio_ro_emulate_ops);
 
     return rc != X86EMUL_UNHANDLEABLE ? EXCRET_fault_fixed : 0;
 }
--- unstable.orig/xen/arch/x86/pci.c
+++ unstable/xen/arch/x86/pci.c
@@ -75,6 +75,15 @@ int pci_conf_write_intercept(unsigned in
     struct pci_dev *pdev;
     int rc = 0;
 
+    if ( !data ) /* probe */
+    {
+        ASSERT(!~reg && !size);
+        return pci_find_cap_offset(seg, PCI_BUS(bdf), PCI_SLOT(bdf),
+                                   PCI_FUNC(bdf), PCI_CAP_ID_MSI) ||
+               pci_find_cap_offset(seg, PCI_BUS(bdf), PCI_SLOT(bdf),
+                                   PCI_FUNC(bdf), PCI_CAP_ID_MSIX);
+    }
+
     /*
      * Avoid expensive operations when no hook is going to do anything
      * for the access anyway.
--- unstable.orig/xen/arch/x86/x86_64/mmconfig_64.c
+++ unstable/xen/arch/x86/x86_64/mmconfig_64.c
@@ -154,10 +154,25 @@ void arch_pci_ro_device(int seg, int bdf
     }
 }
 
+static void set_ro(const struct acpi_mcfg_allocation *cfg,
+                   const unsigned long *map)
+{
+    unsigned int bdf = PCI_BDF(cfg->start_bus_number, 0, 0);
+    unsigned int end = PCI_BDF(cfg->end_bus_number, -1, -1);
+
+    if (!map)
+        return;
+
+    while ((bdf = find_next_bit(map, end + 1, bdf)) <= end) {
+        arch_pci_ro_device(cfg->pci_segment, bdf);
+        if (bdf++ == end)
+            break;
+    }
+}
+
 int pci_mmcfg_arch_enable(unsigned int idx)
 {
     const typeof(pci_mmcfg_config[0]) *cfg = pci_mmcfg_virt[idx].cfg;
-    const unsigned long *ro_map = pci_get_ro_map(cfg->pci_segment);
 
     if (pci_mmcfg_virt[idx].virt)
         return 0;
@@ -169,16 +184,10 @@ int pci_mmcfg_arch_enable(unsigned int i
     }
     printk(KERN_INFO "PCI: Using MCFG for segment %04x bus %02x-%02x\n",
            cfg->pci_segment, cfg->start_bus_number, cfg->end_bus_number);
-    if (ro_map) {
-        unsigned int bdf = PCI_BDF(cfg->start_bus_number, 0, 0);
-        unsigned int end = PCI_BDF(cfg->end_bus_number, -1, -1);
-
-        while ((bdf = find_next_bit(ro_map, end + 1, bdf)) <= end) {
-            arch_pci_ro_device(cfg->pci_segment, bdf);
-            if (bdf++ == end)
-                break;
-        }
-    }
+
+    set_ro(cfg, pci_get_ro_map(cfg->pci_segment));
+    set_ro(cfg, pci_get_intercept_map(cfg->pci_segment));
+
     return 0;
 }
 
@@ -197,6 +206,32 @@ void pci_mmcfg_arch_disable(unsigned int
            cfg->pci_segment, cfg->start_bus_number, cfg->end_bus_number);
 }
 
+bool_t pci_mmcfg_decode(unsigned long mfn, unsigned int *seg,
+                        unsigned int *bdf)
+{
+    unsigned int idx;
+
+    for (idx = 0; idx < pci_mmcfg_config_num; ++idx) {
+        const struct acpi_mcfg_allocation *cfg = pci_mmcfg_virt[idx].cfg;
+
+        if (pci_mmcfg_virt[idx].virt &&
+            mfn >= PFN_DOWN(cfg->address + (cfg->start_bus_number << 20)) &&
+            mfn < PFN_DOWN(cfg->address + ((cfg->end_bus_number + 1) << 20))) {
+static unsigned long cnt, thr;//temp
+            *seg = cfg->pci_segment;
+            *bdf = mfn - PFN_DOWN(cfg->address);
+if(++cnt > thr) {//temp
+ thr |= cnt;
+ printk("%lx -> %04x:%02x:%02x.%u\n", mfn, *seg, PCI_BUS(*bdf), PCI_SLOT(*bdf), PCI_FUNC(*bdf));
+}
+            return 1;
+        }
+    }
+
+printk("%lx -> ???\n", mfn);//temp
+    return 0;
+}
+
 int __init pci_mmcfg_arch_init(void)
 {
     int i;
--- unstable.orig/xen/drivers/passthrough/pci.c
+++ unstable/xen/drivers/passthrough/pci.c
@@ -39,6 +39,7 @@ struct pci_seg {
     struct list_head alldevs_list;
     u16 nr;
     unsigned long *ro_map;
+    unsigned long *intercept_map;
     /* bus2bridge_lock protects bus2bridge array */
     spinlock_t bus2bridge_lock;
 #define MAX_BUSES 256
@@ -123,6 +124,13 @@ const unsigned long *pci_get_ro_map(u16 
     return pseg ? pseg->ro_map : NULL;
 }
 
+const unsigned long *pci_get_intercept_map(u16 seg)
+{
+    struct pci_seg *pseg = get_pseg(seg);
+
+    return pseg ? pseg->intercept_map : NULL;
+}
+
 static struct phantom_dev {
     u16 seg;
     u8 bus, slot, stride;
@@ -284,6 +292,20 @@ static struct pci_dev *alloc_pdev(struct
     pdev->domain = NULL;
     INIT_LIST_HEAD(&pdev->msi_list);
 
+    if ( !pseg->intercept_map &&
+         pci_conf_write_intercept(pseg->nr, PCI_BDF2(bus, devfn), ~0, 0,
+                                  NULL) > 0 )
+    {
+        size_t sz = BITS_TO_LONGS(PCI_BDF(-1, -1, -1) + 1) * sizeof(long);
+
+        pseg->intercept_map = vzalloc(sz);
+        if ( !pseg->intercept_map )
+        {
+            xfree(pdev);
+            return NULL;
+        }
+    }
+
     if ( pci_find_cap_offset(pseg->nr, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                              PCI_CAP_ID_MSIX) )
     {
@@ -366,6 +388,13 @@ static struct pci_dev *alloc_pdev(struct
 
     check_pdev(pdev);
 
+    if ( pci_conf_write_intercept(pseg->nr, PCI_BDF2(bus, devfn), ~0, 0,
+                                  NULL) > 0 )
+    {
+        set_bit(PCI_BDF2(bus, devfn), pseg->intercept_map);
+        arch_pci_ro_device(pseg->nr, PCI_BDF2(bus, devfn));
+    }
+
     return pdev;
 }
 
--- unstable.orig/xen/include/asm-x86/pci.h
+++ unstable/xen/include/asm-x86/pci.h
@@ -20,5 +20,7 @@ int pci_conf_write_intercept(unsigned in
                              uint32_t *data);
 int pci_msi_conf_write_intercept(struct pci_dev *, unsigned int reg,
                                  unsigned int size, uint32_t *data);
+bool_t pci_mmcfg_decode(unsigned long mfn, unsigned int *seg,
+                        unsigned int *bdf);
 
 #endif /* __X86_PCI_H__ */
--- unstable.orig/xen/include/xen/pci.h
+++ unstable/xen/include/xen/pci.h
@@ -106,6 +106,7 @@ void setup_hwdom_pci_devices(struct doma
 int pci_release_devices(struct domain *d);
 int pci_add_segment(u16 seg);
 const unsigned long *pci_get_ro_map(u16 seg);
+const unsigned long *pci_get_intercept_map(u16 seg);
 int pci_add_device(u16 seg, u8 bus, u8 devfn,
                    const struct pci_dev_info *, nodeid_t node);
 int pci_remove_device(u16 seg, u8 bus, u8 devfn);
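Side note, not part of the patch itself: the mfn -> BDF translation in
pci_mmcfg_decode() above relies on nothing more than the standard
ECAM/MMCFG layout, under which each function's 4k of config space sits
at a fixed page offset from the segment's base. A minimal sketch of the
arithmetic, with an illustrative helper name not taken from the tree:

/*
 * ECAM: the byte offset of (bus, dev, fn)'s config space within the
 * segment's MMCFG window is (bus << 20) | (dev << 15) | (fn << 12),
 * i.e. exactly (bdf << 12) - one 4k page per function.
 */
static inline unsigned long mmcfg_page_mfn(unsigned long base_mfn,
                                           unsigned int bus,
                                           unsigned int dev,
                                           unsigned int fn)
{
    unsigned int bdf = (bus << 8) | (dev << 3) | fn;

    return base_mfn + bdf;      /* 4k pages, so the page offset equals bdf */
}

/* Hence the reverse mapping used above: bdf = mfn - PFN_DOWN(cfg->address). */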