From: Bjorn Helgaas <helgaas@kernel.org>
To: Will McVicker <willmcvicker@google.com>
Cc: "Jingoo Han" <jingoohan1@gmail.com>,
	"Gustavo Pimentel" <gustavo.pimentel@synopsys.com>,
	"Lorenzo Pieralisi" <lorenzo.pieralisi@arm.com>,
	"Rob Herring" <robh@kernel.org>,
	"Krzysztof Wilczyński" <kw@linux.com>,
	"Bjorn Helgaas" <bhelgaas@google.com>,
	kernel-team@android.com, "Vidya Sagar" <vidyas@nvidia.com>,
	linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 1/1] PCI: dwc: Fix MSI msi_msg dma mapping
Date: Mon, 13 Jun 2022 17:29:48 -0500
Message-ID: <20220613222948.GA721845@bhelgaas>
In-Reply-To: <20220525223316.388490-1-willmcvicker@google.com>

On Wed, May 25, 2022 at 10:33:16PM +0000, Will McVicker wrote:
> As of commit 07940c369a6b ("PCI: dwc: Fix MSI page leakage in
> suspend/resume"), the PCIe designware host driver has been using its
> driver data allocation (the msi_msg field) as the MSI DMA target,
> which can result in a DMA_MAPPING_ERROR due to the DMA overflow check
> in dma_direct_map_page() when the address is greater than 32 bits
> (reported in [1]). That commit was trying to address a memory leak on
> suspend/resume by moving the MSI mapping to dw_pcie_host_init(), but
> it also dropped the page allocation on the assumption that it wasn't
> needed.
> 
> To fix the DMA mapping issue and keep msi_msg DMA-able, let's switch
> back to allocating a page from the 32-bit DMA zone (GFP_DMA32) for the
> msi_msg target. To avoid the suspend/resume leak, we can allocate the
> page in dw_pcie_host_init(), since that function shouldn't be called
> during suspend/resume.
> 
> [1] https://lore.kernel.org/all/Yo0soniFborDl7+C@google.com/
> 
> Signed-off-by: Will McVicker <willmcvicker@google.com>

Applied to pci/ctrl/dwc for v5.20, thanks!

> ---
>  drivers/pci/controller/dwc/pcie-designware-host.c | 14 ++++++++------
>  drivers/pci/controller/dwc/pcie-designware.h      |  2 +-
>  2 files changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index 2fa86f32d964..3655c6f88bf1 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -267,8 +267,9 @@ static void dw_pcie_free_msi(struct pcie_port *pp)
>  		struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>  		struct device *dev = pci->dev;
>  
> -		dma_unmap_single_attrs(dev, pp->msi_data, sizeof(pp->msi_msg),
> -				       DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
> +		dma_unmap_page(dev, pp->msi_data, PAGE_SIZE, DMA_FROM_DEVICE);
> +		if (pp->msi_page)
> +			__free_page(pp->msi_page);
>  	}
>  }
>  
> @@ -392,12 +393,13 @@ int dw_pcie_host_init(struct pcie_port *pp)
>  			if (ret)
>  				dev_warn(pci->dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");
>  
> -			pp->msi_data = dma_map_single_attrs(pci->dev, &pp->msi_msg,
> -						      sizeof(pp->msi_msg),
> -						      DMA_FROM_DEVICE,
> -						      DMA_ATTR_SKIP_CPU_SYNC);
> +			pp->msi_page = alloc_page(GFP_DMA32);
> +			pp->msi_data = dma_map_page(pci->dev, pp->msi_page, 0, PAGE_SIZE,
> +						    DMA_FROM_DEVICE);
>  			if (dma_mapping_error(pci->dev, pp->msi_data)) {
>  				dev_err(pci->dev, "Failed to map MSI data\n");
> +				__free_page(pp->msi_page);
> +				pp->msi_page = NULL;
>  				pp->msi_data = 0;
>  				goto err_free_msi;
>  			}
> diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
> index 7d6e9b7576be..b5f528536358 100644
> --- a/drivers/pci/controller/dwc/pcie-designware.h
> +++ b/drivers/pci/controller/dwc/pcie-designware.h
> @@ -190,8 +190,8 @@ struct pcie_port {
>  	int			msi_irq;
>  	struct irq_domain	*irq_domain;
>  	struct irq_domain	*msi_domain;
> -	u16			msi_msg;
>  	dma_addr_t		msi_data;
> +	struct page		*msi_page;
>  	struct irq_chip		*msi_irq_chip;
>  	u32			num_vectors;
>  	u32			irq_mask[MAX_MSI_CTRLS];
> -- 
> 2.36.1.124.g0e6072fb45-goog
> 
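
For reference, the scheme the patch switches back to looks roughly like
the sketch below. It is a minimal illustration rather than the exact
driver code: the example_msi_setup()/example_msi_teardown() helpers and
their dev/msi_page/msi_data parameters are stand-ins for the
corresponding struct pcie_port fields touched by the hunks above.

  #include <linux/dma-mapping.h>
  #include <linux/gfp.h>

  /*
   * Allocate the MSI target page from the 32-bit-addressable zone
   * (GFP_DMA32) and map it for the device.  Together with the 32-bit
   * DMA mask set earlier in dw_pcie_host_init(), this keeps the mapped
   * address from tripping the overflow check in dma_direct_map_page().
   */
  static int example_msi_setup(struct device *dev, struct page **msi_page,
			       dma_addr_t *msi_data)
  {
	*msi_page = alloc_page(GFP_DMA32);
	if (!*msi_page)
		return -ENOMEM;

	*msi_data = dma_map_page(dev, *msi_page, 0, PAGE_SIZE,
				 DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, *msi_data)) {
		__free_page(*msi_page);
		*msi_page = NULL;
		return -ENOMEM;
	}

	return 0;
  }

  /* Teardown mirrors the dw_pcie_free_msi() change in the patch above. */
  static void example_msi_teardown(struct device *dev, struct page *msi_page,
				   dma_addr_t msi_data)
  {
	dma_unmap_page(dev, msi_data, PAGE_SIZE, DMA_FROM_DEVICE);
	if (msi_page)
		__free_page(msi_page);
  }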
