From: Vidya Sagar <vidyas@nvidia.com>
To: Robin Murphy <robin.murphy@arm.com>,
	Jisheng Zhang <Jisheng.Zhang@synaptics.com>,
	Kishon Vijay Abraham I <kishon@ti.com>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Rob Herring <robh@kernel.org>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Jingoo Han <jingoohan1@gmail.com>,
	Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: <linux-pci@vger.kernel.org>, <linux-omap@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v7 2/2] PCI: dwc: Fix MSI page leakage in suspend/resume
Date: Mon, 12 Oct 2020 21:05:53 +0530
Message-ID: <1a1c41f1-4085-6b24-adea-d1e0867e7d9d@nvidia.com>
In-Reply-To: <38a00dde-598f-b6de-ecf3-5d012bd7594a@arm.com>



On 10/12/2020 5:07 PM, Robin Murphy wrote:
> 
> On 2020-10-09 08:55, Jisheng Zhang wrote:
>> Currently, dw_pcie_msi_init() allocates and maps a page for MSI, then
>> programs PCIE_MSI_ADDR_LO and PCIE_MSI_ADDR_HI. The Root Complex
>> may lose power during suspend-to-RAM, so on resume we want to
>> redo the latter but not the former. If a DesignWare-based driver (for
>> example, pcie-tegra194.c) calls dw_pcie_msi_init() in the resume path,
>> the MSI page is leaked.
>>
>> As pointed out by Rob and Ard, there is no need to allocate a page for
>> the MSI address; we can use an address in the driver data instead.
>>
>> To avoid mapping the MSI message again during resume, move the mapping
>> from dw_pcie_msi_init() to dw_pcie_host_init().
> 
> You should move the unmap there as well. As soon as you know what the
> relevant address would be if you *were* to do DMA to this location, then
> the exercise is complete. Leaving it mapped for the lifetime of the
> device in order to do not-DMA to it seems questionable (and represents
> technically incorrect API usage without at least a sync_for_cpu call
> before any other access to the data).
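(Just for reference, the streaming-API rule Robin is pointing at would look
roughly like this untested sketch, if the mapping were kept around and the
CPU ever needed to touch pp->msi_msg again; ownership has to go back to the
CPU before the access and back to the device afterwards:

        dma_sync_single_for_cpu(dev, pp->msi_data, sizeof(pp->msi_msg),
                                DMA_FROM_DEVICE);
        /* ... CPU may read/update pp->msi_msg here ... */
        dma_sync_single_for_device(dev, pp->msi_data, sizeof(pp->msi_msg),
                                   DMA_FROM_DEVICE);

None of that is needed if the buffer is never streaming-mapped at all.)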
> 
> Another point of note is that using streaming DMA mappings at all is a
> bit fragile (regardless of this change). If the host controller itself
> has a limited DMA mask relative to physical memory (which integrators
> still seem to keep doing...) then you could end up punching your MSI
> hole right in the middle of the SWIOTLB bounce buffer, where it's then
> almost *guaranteed* to interfere with real DMA :(
Agree with Robin. Since the MSI target buffer is going to stay mapped until
shutdown/reboot, wouldn't it make sense to use the dma_alloc_coherent() API?
Also, shouldn't we call dma_set_mask() to limit the address to 32 bits, so
that MSI works even for legacy PCIe devices that only have 32-bit MSI
capability?
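
Roughly what I have in mind (untested sketch; dw_pcie_msi_map_target() and
the msi_vaddr field are made-up names just for illustration, only the DMA
API calls themselves are real):

        static int dw_pcie_msi_map_target(struct pcie_port *pp)
        {
                struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
                struct device *dev = pci->dev;
                int ret;

                /* Keep the MSI target below 4GB for endpoints that only
                 * support 32-bit MSI addresses.
                 */
                ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
                if (ret)
                        return ret;

                /* Coherent buffer lives until shutdown, so no streaming
                 * mapping and no sync calls are needed.
                 */
                pp->msi_vaddr = dma_alloc_coherent(dev, sizeof(u64),
                                                   &pp->msi_data, GFP_KERNEL);
                if (!pp->msi_vaddr)
                        return -ENOMEM;

                return 0;
        }

A coherent allocation would also sidestep the SWIOTLB collision Robin
describes above, since it comes from memory the device can actually address.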

- Vidya Sagar

> 
> If no DWC users have that problem and the current code is working well
> enough, then I see little reason not to make this particular change to
> tidy up the implementation, just bear in mind that there's always the
> possibility of having to come back and change it yet again in future to
> make it more robust. I had it in mind that this trick was done with a
> coherent DMA allocation, which would be safe from addressing problems
> but would need to be kept around for the lifetime of the device, but
> maybe that was a different driver :/
> 
> Robin.
> 
>> Suggested-by: Rob Herring <robh@kernel.org>
>> Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
>> Reviewed-by: Rob Herring <robh@kernel.org>
>> ---
>>   drivers/pci/controller/dwc/pci-dra7xx.c       | 18 +++++++++-
>>   .../pci/controller/dwc/pcie-designware-host.c | 33 ++++++++++---------
>>   drivers/pci/controller/dwc/pcie-designware.h  |  2 +-
>>   3 files changed, 36 insertions(+), 17 deletions(-)
>>
>> diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
>> index 8f0b6d644e4b..6d012d2b1e90 100644
>> --- a/drivers/pci/controller/dwc/pci-dra7xx.c
>> +++ b/drivers/pci/controller/dwc/pci-dra7xx.c
>> @@ -466,7 +466,9 @@ static struct irq_chip dra7xx_pci_msi_bottom_irq_chip = {
>>   static int dra7xx_pcie_msi_host_init(struct pcie_port *pp)
>>   {
>>       struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +     struct device *dev = pci->dev;
>>       u32 ctrl, num_ctrls;
>> +     int ret;
>>
>>       pp->msi_irq_chip = &dra7xx_pci_msi_bottom_irq_chip;
>>
>> @@ -482,7 +484,21 @@ static int dra7xx_pcie_msi_host_init(struct pcie_port *pp)
>>                                   ~0);
>>       }
>>
>> -     return dw_pcie_allocate_domains(pp);
>> +     ret = dw_pcie_allocate_domains(pp);
>> +     if (ret)
>> +             return ret;
>> +
>> +     pp->msi_data = dma_map_single_attrs(dev, &pp->msi_msg,
>> +                                        sizeof(pp->msi_msg),
>> +                                        DMA_FROM_DEVICE,
>> +                                        DMA_ATTR_SKIP_CPU_SYNC);
>> +     ret = dma_mapping_error(dev, pp->msi_data);
>> +     if (ret) {
>> +             dev_err(dev, "Failed to map MSI data\n");
>> +             pp->msi_data = 0;
>> +             dw_pcie_free_msi(pp);
>> +     }
>> +     return ret;
>>   }
>>
>>   static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = {
>> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
>> index d3e9ea11ce9e..d02c7e74738d 100644
>> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
>> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
>> @@ -266,30 +266,23 @@ void dw_pcie_free_msi(struct pcie_port *pp)
>>       irq_domain_remove(pp->msi_domain);
>>       irq_domain_remove(pp->irq_domain);
>>
>> -     if (pp->msi_page)
>> -             __free_page(pp->msi_page);
>> +     if (pp->msi_data) {
>> +             struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> +             struct device *dev = pci->dev;
>> +
>> +             dma_unmap_single_attrs(dev, pp->msi_data, sizeof(pp->msi_msg),
>> +                                    DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
>> +     }
>>   }
>>
>>   void dw_pcie_msi_init(struct pcie_port *pp)
>>   {
>>       struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> -     struct device *dev = pci->dev;
>> -     u64 msi_target;
>> +     u64 msi_target = (u64)pp->msi_data;
>>
>>       if (!IS_ENABLED(CONFIG_PCI_MSI))
>>               return;
>>
>> -     pp->msi_page = alloc_page(GFP_KERNEL);
>> -     pp->msi_data = dma_map_page(dev, pp->msi_page, 0, PAGE_SIZE,
>> -                                 DMA_FROM_DEVICE);
>> -     if (dma_mapping_error(dev, pp->msi_data)) {
>> -             dev_err(dev, "Failed to map MSI data\n");
>> -             __free_page(pp->msi_page);
>> -             pp->msi_page = NULL;
>> -             return;
>> -     }
>> -     msi_target = (u64)pp->msi_data;
>> -
>>       /* Program the msi_data */
>>       dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_LO, lower_32_bits(msi_target));
>>       dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_HI, upper_32_bits(msi_target));
>> @@ -394,6 +387,16 @@ int dw_pcie_host_init(struct pcie_port *pp)
>>                               irq_set_chained_handler_and_data(pp->msi_irq,
>>                                                           dw_chained_msi_isr,
>>                                                           pp);
>> +
>> +                     pp->msi_data = dma_map_single_attrs(pci->dev, &pp->msi_msg,
>> +                                                   sizeof(pp->msi_msg),
>> +                                                   DMA_FROM_DEVICE,
>> +                                                   DMA_ATTR_SKIP_CPU_SYNC);
>> +                     if (dma_mapping_error(pci->dev, pp->msi_data)) {
>> +                             dev_err(pci->dev, "Failed to map MSI data\n");
>> +                             pp->msi_data = 0;
>> +                             goto err_free_msi;
>> +                     }
>>               } else {
>>                       ret = pp->ops->msi_host_init(pp);
>>                       if (ret < 0)
>> diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
>> index 97c7063b9e89..9d2f511f13fa 100644
>> --- a/drivers/pci/controller/dwc/pcie-designware.h
>> +++ b/drivers/pci/controller/dwc/pcie-designware.h
>> @@ -190,8 +190,8 @@ struct pcie_port {
>>       int                     msi_irq;
>>       struct irq_domain       *irq_domain;
>>       struct irq_domain       *msi_domain;
>> +     u16                     msi_msg;
>>       dma_addr_t              msi_data;
>> -     struct page             *msi_page;
>>       struct irq_chip         *msi_irq_chip;
>>       u32                     num_vectors;
>>       u32                     irq_mask[MAX_MSI_CTRLS];
>>

