linux-kernel.vger.kernel.org archive mirror
* possible dmar_init_reserved_ranges() error
@ 2016-12-19 21:20 Bjorn Helgaas
  2016-12-22 16:27 ` Joerg Roedel
  2016-12-27 23:44 ` Bjorn Helgaas
  0 siblings, 2 replies; 10+ messages in thread
From: Bjorn Helgaas @ 2016-12-19 21:20 UTC (permalink / raw)
  To: David Woodhouse, Joerg Roedel; +Cc: rwright, iommu, linux-pci, linux-kernel

Hi guys,

I have some questions about dmar_init_reserved_ranges().  On systems
where CPU physical address space is not identity-mapped to PCI bus
address space, e.g., where the PCI host bridge windows have _TRA
offsets, I'm not sure we're doing the right thing.

Assume we have a PCI host bridge with _TRA that maps CPU addresses
0x80000000-0x9fffffff to PCI bus addresses 0x00000000-0x1fffffff, with
two PCI devices below it:

  PCI host bridge domain 0000 [bus 00-3f]
  PCI host bridge window [mem 0x80000000-0x9fffffff] (bus 0x00000000-0x1fffffff)
  00:00.0: BAR 0 [mem 0x80000000-0x8fffffff] (0x00000000-0x0fffffff on bus)
  00:01.0: BAR 0 [mem 0x90000000-0x9fffffff] (0x10000000-0x1fffffff on bus)

The IOMMU init code in dmar_init_reserved_ranges() reserves the PCI
MMIO space for all devices:

  pci_iommu_init()
    intel_iommu_init()
      dmar_init_reserved_ranges()
        reserve_iova(0x80000000-0x8fffffff)
        reserve_iova(0x90000000-0x9fffffff)

This looks odd because we're reserving CPU physical addresses, but
the IOVA space contains *PCI bus* addresses.  On most x86 systems they
would be the same, but not on all.

Assume the driver for 00:00.0 maps a page of main memory for DMA.  It
may receive a dma_addr_t of 0x10000000:

  00:00.0: intel_map_page() returns dma_addr_t 0x10000000
  00:00.0: issues DMA to 0x10000000

What happens here?  The DMA access should go to main memory.  In
conventional PCI it would be a peer-to-peer access to device 00:01.0.
Is there enough PCIe smarts (ACS or something?) to do otherwise?

The dmar_init_reserved_ranges() comment says "Reserve all PCI MMIO to
avoid peer-to-peer access."  Without _TRA, CPU addresses and PCI bus
addresses would be identical, and I think these reserve_iova() calls
*would* prevent this situation.  So maybe we're just missing a
pcibios_resource_to_bus() here?

Bjorn

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: possible dmar_init_reserved_ranges() error
  2016-12-19 21:20 possible dmar_init_reserved_ranges() error Bjorn Helgaas
@ 2016-12-22 16:27 ` Joerg Roedel
  2016-12-22 20:28   ` Bjorn Helgaas
  2016-12-27 23:44 ` Bjorn Helgaas
  1 sibling, 1 reply; 10+ messages in thread
From: Joerg Roedel @ 2016-12-22 16:27 UTC (permalink / raw)
  To: Bjorn Helgaas; +Cc: David Woodhouse, rwright, iommu, linux-pci, linux-kernel

Hi Bjorn,

On Mon, Dec 19, 2016 at 03:20:44PM -0600, Bjorn Helgaas wrote:
> I have some questions about dmar_init_reserved_ranges().  On systems
> where CPU physical address space is not identity-mapped to PCI bus
> address space, e.g., where the PCI host bridge windows have _TRA
> offsets, I'm not sure we're doing the right thing.
> 
> Assume we have a PCI host bridge with _TRA that maps CPU addresses
> 0x80000000-0x9fffffff to PCI bus addresses 0x00000000-0x1fffffff, with
> two PCI devices below it:
> 
>   PCI host bridge domain 0000 [bus 00-3f]
>   PCI host bridge window [mem 0x80000000-0x9fffffff] (bus 0x00000000-0x1fffffff)
>   00:00.0: BAR 0 [mem 0x80000000-0x8fffffff] (0x00000000-0x0fffffff on bus)
>   00:01.0: BAR 0 [mem 0x90000000-0x9fffffff] (0x10000000-0x1fffffff on bus)
> 
> The IOMMU init code in dmar_init_reserved_ranges() reserves the PCI
> MMIO space for all devices:
> 
>   pci_iommu_init()
>     intel_iommu_init()
>       dmar_init_reserved_ranges()
>         reserve_iova(0x80000000-0x8fffffff)
>         reserve_iova(0x90000000-0x9fffffff)
> 
> This looks odd because we're reserving CPU physical addresses, but
> the IOVA space contains *PCI bus* addresses.  On most x86 systems they
> would be the same, but not on all.

Interesting, I wasn't aware of that. It looks like we are not doing the
right thing in dmar_init_reserved_ranges(). How is that handled without
an IOMMU, when the bus addresses overlap with RAM addresses?
 
> Assume the driver for 00:00.0 maps a page of main memory for DMA.  It
> may receive a dma_addr_t of 0x10000000:
> 
>   00:00.0: intel_map_page() returns dma_addr_t 0x10000000
>   00:00.0: issues DMA to 0x10000000
> 
> What happens here?  The DMA access should go to main memory.  In
> conventional PCI it would be a peer-to-peer access to device 00:01.0.
> Is there enough PCIe smarts (ACS or something?) to do otherwise?

If there is a bridge doing ACS between the devices, the IOMMU will see
the request and re-map it to its RAM address.

> The dmar_init_reserved_ranges() comment says "Reserve all PCI MMIO to
> avoid peer-to-peer access."  Without _TRA, CPU addresses and PCI bus
> addresses would be identical, and I think these reserve_iova() calls
> *would* prevent this situation.  So maybe we're just missing a
> pcibios_resource_to_bus() here?

I'll have a look. The AMD IOMMU driver implements this too, so it also
needs to be fixed there. Do you know which x86 systems are configured
like this?


	Joerg

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: possible dmar_init_reserved_ranges() error
  2016-12-22 16:27 ` Joerg Roedel
@ 2016-12-22 20:28   ` Bjorn Helgaas
  2016-12-22 23:32     ` Raj, Ashok
  0 siblings, 1 reply; 10+ messages in thread
From: Bjorn Helgaas @ 2016-12-22 20:28 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: David Woodhouse, rwright, iommu, linux-pci, linux-kernel

On Thu, Dec 22, 2016 at 05:27:14PM +0100, Joerg Roedel wrote:
> Hi Bjorn,
> 
> On Mon, Dec 19, 2016 at 03:20:44PM -0600, Bjorn Helgaas wrote:
> > I have some questions about dmar_init_reserved_ranges().  On systems
> > where CPU physical address space is not identity-mapped to PCI bus
> > address space, e.g., where the PCI host bridge windows have _TRA
> > offsets, I'm not sure we're doing the right thing.
> > 
> > Assume we have a PCI host bridge with _TRA that maps CPU addresses
> > 0x80000000-0x9fffffff to PCI bus addresses 0x00000000-0x1fffffff, with
> > two PCI devices below it:
> > 
> >   PCI host bridge domain 0000 [bus 00-3f]
> >   PCI host bridge window [mem 0x80000000-0x9fffffff] (bus 0x00000000-0x1fffffff)
> >   00:00.0: BAR 0 [mem 0x80000000-0x8fffffff] (0x00000000-0x0fffffff on bus)
> >   00:01.0: BAR 0 [mem 0x90000000-0x9fffffff] (0x10000000-0x1fffffff on bus)
> > 
> > The IOMMU init code in dmar_init_reserved_ranges() reserves the PCI
> > MMIO space for all devices:
> > 
> >   pci_iommu_init()
> >     intel_iommu_init()
> >       dmar_init_reserved_ranges()
> >         reserve_iova(0x80000000-0x8fffffff)
> >         reserve_iova(0x90000000-0x9fffffff)
> > 
> > This looks odd because we're reserving CPU physical addresses, but
> > the IOVA space contains *PCI bus* addresses.  On most x86 systems they
> > would be the same, but not on all.
> 
> Interesting, I wasn't aware of that. It looks like we are not doing the
> right thing in dmar_init_reserved_ranges(). How is that handled without
> an IOMMU, when the bus addresses overlap with RAM addresses?

I don't know enough about these systems to answer that.  One way would
be to avoid overlaps, e.g., by using bus addresses
0x80000000-0xffffffff and not putting RAM at those addresses.  Or
maybe the host bridge could apply a constant offset to bus addresses
before forwarding transactions up to the system bus.

> > Assume the driver for 00:00.0 maps a page of main memory for DMA.  It
> > may receive a dma_addr_t of 0x10000000:
> > 
> >   00:00.0: intel_map_page() returns dma_addr_t 0x10000000
> >   00:00.0: issues DMA to 0x10000000
> > 
> > What happens here?  The DMA access should go to main memory.  In
> > conventional PCI it would be a peer-to-peer access to device 00:01.0.
> > Is there enough PCIe smarts (ACS or something?) to do otherwise?
> 
> If there is a bridge doing ACS between the devices, the IOMMU will see
> the request and re-map it to its RAM address.
> 
> > The dmar_init_reserved_ranges() comment says "Reserve all PCI MMIO to
> > avoid peer-to-peer access."  Without _TRA, CPU addresses and PCI bus
> > addresses would be identical, and I think these reserve_iova() calls
> > *would* prevent this situation.  So maybe we're just missing a
> > pcibios_resource_to_bus() here?
> 
> I'll have a look. The AMD IOMMU driver implements this too, so it also
> needs to be fixed there. Do you know which x86 systems are configured
> like this?

http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=b4873931cc8c
added this support, and I'm pretty sure it was tested, but I don't
know what machines it was for.  I know many large ia64 systems use
this _TRA support, but I don't have first-hand knowledge of x86
systems that do.

The untested patch below is what I was thinking for the Intel IOMMU
driver.

Bjorn


commit 529a6db0b0b2ff37a0cdb49d11eee4eb6f960a48
Author: Bjorn Helgaas <bhelgaas@google.com>
Date:   Tue Dec 20 11:08:09 2016 -0600

    iommu/vt-d: Reserve IOVA space for bus address, not CPU address
    
    IOVA space contains bus addresses, not CPU addresses.  On many systems they
    are identical, but PCI host bridges in some systems do apply an address
    offset when forwarding CPU MMIO transactions to PCI.  In ACPI, this is
    expressed as a _TRA offset in the window descriptor.
    
    Convert the PCI resource CPU addresses to PCI bus addresses before
    reserving them in the IOVA space.
    
    Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index c66c273..be78ab7 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1865,6 +1865,7 @@ static struct lock_class_key reserved_rbtree_key;
 static int dmar_init_reserved_ranges(void)
 {
 	struct pci_dev *pdev = NULL;
+	struct pci_bus_region region;
 	struct iova *iova;
 	int i;
 
@@ -1890,9 +1891,11 @@ static int dmar_init_reserved_ranges(void)
 			r = &pdev->resource[i];
 			if (!r->flags || !(r->flags & IORESOURCE_MEM))
 				continue;
+
+			pcibios_resource_to_bus(pdev->bus, &region, r);
 			iova = reserve_iova(&reserved_iova_list,
-					    IOVA_PFN(r->start),
-					    IOVA_PFN(r->end));
+					    IOVA_PFN(region.start),
+					    IOVA_PFN(region.end));
 			if (!iova) {
 				pr_err("Reserve iova failed\n");
 				return -ENODEV;

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: possible dmar_init_reserved_ranges() error
  2016-12-22 20:28   ` Bjorn Helgaas
@ 2016-12-22 23:32     ` Raj, Ashok
  2016-12-22 23:45       ` Raj, Ashok
  0 siblings, 1 reply; 10+ messages in thread
From: Raj, Ashok @ 2016-12-22 23:32 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Joerg Roedel, linux-pci, iommu, David Woodhouse, linux-kernel,
	rwright, ashok.raj

Hi Bjorn

On Thu, Dec 22, 2016 at 02:28:03PM -0600, Bjorn Helgaas wrote:
> On Thu, Dec 22, 2016 at 05:27:14PM +0100, Joerg Roedel wrote:
> > Hi Bjorn,
> > 
> > On Mon, Dec 19, 2016 at 03:20:44PM -0600, Bjorn Helgaas wrote:
> > > I have some questions about dmar_init_reserved_ranges().  On systems
> > > where CPU physical address space is not identity-mapped to PCI bus
> > > address space, e.g., where the PCI host bridge windows have _TRA
> > > offsets, I'm not sure we're doing the right thing.
> > > 
> > > Assume we have a PCI host bridge with _TRA that maps CPU addresses
> > > 0x80000000-0x9fffffff to PCI bus addresses 0x00000000-0x1fffffff, with
> > > two PCI devices below it:

This is the first time I'm hearing about it too! I tracked it back to
2002, to one of Bjorn's patches from a past life :-)

> > > 
> > >   PCI host bridge domain 0000 [bus 00-3f]
> > >   PCI host bridge window [mem 0x80000000-0x9fffffff] (bus 0x00000000-0x1fffffff)
> > >   00:00.0: BAR 0 [mem 0x80000000-0x8fffffff] (0x00000000-0x0fffffff on bus)
> > >   00:01.0: BAR 0 [mem 0x90000000-0x9fffffff] (0x10000000-0x1fffffff on bus)
> > > 
> > > The IOMMU init code in dmar_init_reserved_ranges() reserves the PCI
> > > MMIO space for all devices:
> > > 
> > >   pci_iommu_init()
> > >     intel_iommu_init()
> > >       dmar_init_reserved_ranges()
> > >         reserve_iova(0x80000000-0x8fffffff)
> > >         reserve_iova(0x90000000-0x9fffffff)
> > > 
> > > This looks odd because we're reserving CPU physical addresses, but
> > > the IOVA space contains *PCI bus* addresses.  On most x86 systems they
> > > would be the same, but not on all.
> > 
> > Interesting, I wasn't aware of that. It looks like we are not doing the
> > right thing in dmar_init_reserved_ranges(). How is that handled without
> > an IOMMU, when the bus addresses overlap with RAM addresses?

I'm not aware of any platforms that do _TRA. I'm checking internally
to see whether others have come across something like that.

> 
> I don't know enough about these systems to answer that.  One way would
> be to avoid overlaps, e.g., by using bus addresses
> 0x80000000-0xffffffff and not putting RAM at those addresses.  Or
> maybe the host bridge could apply a constant offset to bus addresses
> before forwarding transactions up to the system bus.
> 
> > > Assume the driver for 00:00.0 maps a page of main memory for DMA.  It
> > > may receive a dma_addr_t of 0x10000000:
> > > 
> > >   00:00.0: intel_map_page() returns dma_addr_t 0x10000000
> > >   00:00.0: issues DMA to 0x10000000
> > > 
> > > What happens here?  The DMA access should go to main memory.  In
> > > conventional PCI it would be a peer-to-peer access to device 00:01.0.
> > > Is there enough PCIe smarts (ACS or something?) to do otherwise?
> > 
> > If there is a bridge doing ACS between the devices, the IOMMU will see
> > the request and re-map it to its RAM address.

True, if ACS is enabled everywhere we don't need this; that's probably
true for legacy too. But in the big scheme of things, the reservation
doesn't hurt.

> > 
> > > The dmar_init_reserved_ranges() comment says "Reserve all PCI MMIO to
> > > avoid peer-to-peer access."  Without _TRA, CPU addresses and PCI bus
> > > addresses would be identical, and I think these reserve_iova() calls
> > > *would* prevent this situation.  So maybe we're just missing a
> > > pcibios_resource_to_bus() here?
> > 
> > I'll have a look. The AMD IOMMU driver implements this too, so it also
> > needs to be fixed there. Do you know which x86 systems are configured
> > like this?
> 
Let me check whether we have such platforms and keep you posted, so we
know whether we need these considerations for _TRA.

Cheers,
Ashok

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: possible dmar_init_reserved_ranges() error
  2016-12-22 23:32     ` Raj, Ashok
@ 2016-12-22 23:45       ` Raj, Ashok
  2016-12-23  0:48         ` Bjorn Helgaas
  0 siblings, 1 reply; 10+ messages in thread
From: Raj, Ashok @ 2016-12-22 23:45 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Joerg Roedel, linux-pci, iommu, David Woodhouse, linux-kernel,
	rwright, ashok.raj

Hi Bjorn

No one in the platform group says they know about this, so I'm fairly
sure we don't do that on Intel hardware (x86).

I'm not sure about the usage; maybe it was a pre-virtualization hack
for some kind of direct access? (Just wild guessing.)

On Thu, Dec 22, 2016 at 03:32:38PM -0800, Raj, Ashok wrote:
> Let me check whether we have such platforms and keep you posted, so we
> know whether we need these considerations for _TRA.

Cheers,
Ashok

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: possible dmar_init_reserved_ranges() error
  2016-12-22 23:45       ` Raj, Ashok
@ 2016-12-23  0:48         ` Bjorn Helgaas
  2016-12-23 10:35           ` Joerg Roedel
  0 siblings, 1 reply; 10+ messages in thread
From: Bjorn Helgaas @ 2016-12-23  0:48 UTC (permalink / raw)
  To: Raj, Ashok
  Cc: Joerg Roedel, linux-pci, iommu, David Woodhouse, linux-kernel, rwright

Hi Ashok,

On Thu, Dec 22, 2016 at 03:45:08PM -0800, Raj, Ashok wrote:
> Hi Bjorn
> 
> None in the platform group say they know about this. So i'm fairly sure
> we don't do that on Intel hardware (x86). 

I'm pretty sure there was once an x86 prototype for which PCI bus
addresses were not identical to CPU physical addresses, but I have no
idea whether it shipped that way.

Even if such a system never shipped, the x86 arch code supports _TRA,
and there's no reason to make the unnecessary assumption in this code
that _TRA is always zero.

If we didn't want to use pcibios_resource_to_bus() here for some
reason, we should at least add a comment about why we think it's OK to
use a CPU physical address as an IOVA.

Bjorn

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: possible dmar_init_reserved_ranges() error
  2016-12-23  0:48         ` Bjorn Helgaas
@ 2016-12-23 10:35           ` Joerg Roedel
  0 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2016-12-23 10:35 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Raj, Ashok, linux-pci, iommu, David Woodhouse, linux-kernel, rwright

On Thu, Dec 22, 2016 at 06:48:01PM -0600, Bjorn Helgaas wrote:
> If we didn't want to use pcibios_resource_to_bus() here for some
> reason, we should at least add a comment about why we think it's OK to
> use a CPU physical address as an IOVA.

Even if there are no such x86 systems out there, I think it doesn't hurt
to handle the possibility correctly in the IOMMU drivers.



	Joerg

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: possible dmar_init_reserved_ranges() error
  2016-12-19 21:20 possible dmar_init_reserved_ranges() error Bjorn Helgaas
  2016-12-22 16:27 ` Joerg Roedel
@ 2016-12-27 23:44 ` Bjorn Helgaas
  2016-12-28  3:21   ` Raj, Ashok
  1 sibling, 1 reply; 10+ messages in thread
From: Bjorn Helgaas @ 2016-12-27 23:44 UTC (permalink / raw)
  To: David Woodhouse, Joerg Roedel; +Cc: rwright, iommu, linux-pci, linux-kernel

On Mon, Dec 19, 2016 at 03:20:44PM -0600, Bjorn Helgaas wrote:
> Hi guys,
> 
> I have some questions about dmar_init_reserved_ranges().  On systems
> where CPU physical address space is not identity-mapped to PCI bus
> address space, e.g., where the PCI host bridge windows have _TRA
> offsets, I'm not sure we're doing the right thing.
> 
> Assume we have a PCI host bridge with _TRA that maps CPU addresses
> 0x80000000-0x9fffffff to PCI bus addresses 0x00000000-0x1fffffff, with
> two PCI devices below it:
> 
>   PCI host bridge domain 0000 [bus 00-3f]
>   PCI host bridge window [mem 0x80000000-0x9fffffff] (bus 0x00000000-0x1fffffff)
>   00:00.0: BAR 0 [mem 0x80000000-0x8fffffff] (0x00000000-0x0fffffff on bus)
>   00:01.0: BAR 0 [mem 0x90000000-0x9fffffff] (0x10000000-0x1fffffff on bus)
> 
> The IOMMU init code in dmar_init_reserved_ranges() reserves the PCI
> MMIO space for all devices:
> 
>   pci_iommu_init()
>     intel_iommu_init()
>       dmar_init_reserved_ranges()
>         reserve_iova(0x80000000-0x8fffffff)
>         reserve_iova(0x90000000-0x9fffffff)
> 
> This looks odd because we're reserving CPU physical addresses, but
> the IOVA space contains *PCI bus* addresses.  On most x86 systems they
> would be the same, but not on all.

While we're looking at this, here's another question.  We do basically
this:

  dmar_init_reserved_ranges()
  {
    ...
    for_each_pci_dev(pdev) {
      for (i = 0; i < PCI_NUM_RESOURCES; i++) {
        r = &pdev->resource[i];
        reserve_iova(r)

But I assume it's possible to have more than one IOTLB in a system,
so you could have some PCI devices under one IOTLB and others under a
different IOTLB.  So it seems like we should reserve only the IOVA
space used by the devices under *this* IOTLB.

Also, we may hot-add a device under the IOTLB, and I don't see where
we reserve the IOVA space it uses.

I think the best thing to do would be to reserve the host bridge
apertures related to each IOTLB.  That would resolve both questions.
It looks like iova_reserve_pci_windows() does this in the
iommu_dma_init_domain() path.

Bjorn

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: possible dmar_init_reserved_ranges() error
  2016-12-27 23:44 ` Bjorn Helgaas
@ 2016-12-28  3:21   ` Raj, Ashok
  2017-01-04 14:39     ` Joerg Roedel
  0 siblings, 1 reply; 10+ messages in thread
From: Raj, Ashok @ 2016-12-28  3:21 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: David Woodhouse, Joerg Roedel, linux-pci, iommu, linux-kernel,
	rwright, ashok.raj

Hi Bjorn,

On Tue, Dec 27, 2016 at 05:44:17PM -0600, Bjorn Helgaas wrote:
> 
>   dmar_init_reserved_ranges()
>   {
>     ...
>     for_each_pci_dev(pdev) {
>       for (i = 0; i < PCI_NUM_RESOURCES; i++) {
>         r = &pdev->resource[i];
>         reserve_iova(r)
> 
> But I assume it's possible to have more than one IOTLB in a system,

You meant IOMMU?  

> so you could have some PCI devices under one IOTLB and others under a
> different IOTLB.  So it seems like we should reserve only the IOVA
> space used by the devices under *this* IOTLB.

Yes, it seems we aggressively reserve all PCI devices' BARs, when
potentially we only need to reserve ranges for the IOMMU under which
each PCI device sits. We also need to make sure devices under an
INCLUDE_ALL unit are handled correctly.
> 
> Also, we may hot-add a device under the IOTLB, and I don't see where
> we reserve the IOVA space it uses.
> 
> I think the best thing to do would be to reserve the host bridge
> apertures related to each IOTLB.  That would resolve both questions.
> It looks like iova_reserve_pci_windows() does this in the
> iommu_dma_init_domain() path.

This sounds reasonable. If we can reserve from the host bridge apertures
it should take care of hot-plug cases as well, and should simplify how
the reservation is made.

Cheers,
Ashok

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: possible dmar_init_reserved_ranges() error
  2016-12-28  3:21   ` Raj, Ashok
@ 2017-01-04 14:39     ` Joerg Roedel
  0 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2017-01-04 14:39 UTC (permalink / raw)
  To: Raj, Ashok
  Cc: Bjorn Helgaas, David Woodhouse, linux-pci, iommu,
	linux-kernel, rwright

On Tue, Dec 27, 2016 at 07:21:39PM -0800, Raj, Ashok wrote:
> This sounds reasonable. If we can reserve from the host bridge apertures
> it should take care of hot-plug cases as well, and should simplify how
> the reservation is made.

Agreed, I have this on my todo list already since I converted the AMD
IOMMU driver to the iova allocator. Just reserving the pci windows is
much better than what we are currently doing.


	Joerg

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2017-01-04 14:39 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-19 21:20 possible dmar_init_reserved_ranges() error Bjorn Helgaas
2016-12-22 16:27 ` Joerg Roedel
2016-12-22 20:28   ` Bjorn Helgaas
2016-12-22 23:32     ` Raj, Ashok
2016-12-22 23:45       ` Raj, Ashok
2016-12-23  0:48         ` Bjorn Helgaas
2016-12-23 10:35           ` Joerg Roedel
2016-12-27 23:44 ` Bjorn Helgaas
2016-12-28  3:21   ` Raj, Ashok
2017-01-04 14:39     ` Joerg Roedel
