From: Prabhakar Mahadev Lad <prabhakar.mahadev-lad.rj@bp.renesas.com>
To: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Rob Herring <robh+dt@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Geert Uytterhoeven <geert+renesas@glider.be>,
	Magnus Damm <magnus.damm@gmail.com>,
	Kishon Vijay Abraham I <kishon@ti.com>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Marek Vasut <marek.vasut+renesas@gmail.com>,
	"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Andrew Murray <andrew.murray@arm.com>,
	"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>,
	"linux-renesas-soc@vger.kernel.org" 
	<linux-renesas-soc@vger.kernel.org>,
	Chris Paterson <Chris.Paterson2@renesas.com>,
	Frank Rowand <frowand.list@gmail.com>,
	Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
	Jingoo Han <jingoohan1@gmail.com>,
	Simon Horman <horms@verge.net.au>,
	Shawn Lin <shawn.lin@rock-chips.com>,
	Tom Joseph <tjoseph@cadence.com>,
	Heiko Stuebner <heiko@sntech.de>,
	"linux-rockchip@lists.infradead.org" 
	<linux-rockchip@lists.infradead.org>,
	Lad Prabhakar <prabhakar.csengg@gmail.com>
Subject: RE: [PATCH v6 06/11] PCI: endpoint: Add support to handle multiple base for mapping outbound memory
Date: Fri, 3 Apr 2020 09:47:59 +0000	[thread overview]
Message-ID: <OSBPR01MB3590871E79F1B86A6F5B2247AAC70@OSBPR01MB3590.jpnprd01.prod.outlook.com> (raw)
In-Reply-To: <TYAPR01MB4544D205BE659CE74205737CD8C70@TYAPR01MB4544.jpnprd01.prod.outlook.com>

Hi Shimoda-san,

> -----Original Message-----
> From: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
> Sent: 03 April 2020 10:34
> To: Prabhakar Mahadev Lad <prabhakar.mahadev-lad.rj@bp.renesas.com>; Bjorn Helgaas <bhelgaas@google.com>; Rob Herring
> <robh+dt@kernel.org>; Mark Rutland <mark.rutland@arm.com>; Geert Uytterhoeven <geert+renesas@glider.be>; Magnus Damm
> <magnus.damm@gmail.com>; Kishon Vijay Abraham I <kishon@ti.com>; Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>; Marek Vasut
> <marek.vasut+renesas@gmail.com>; linux-pci@vger.kernel.org
> Cc: Catalin Marinas <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Arnd Bergmann <arnd@arndb.de>; Greg Kroah-Hartman
> <gregkh@linuxfoundation.org>; Andrew Murray <andrew.murray@arm.com>; devicetree@vger.kernel.org; linux-kernel@vger.kernel.org;
> linux-arm-kernel@lists.infradead.org; linux-renesas-soc@vger.kernel.org; Chris Paterson <Chris.Paterson2@renesas.com>; Frank Rowand
> <frowand.list@gmail.com>; Gustavo Pimentel <gustavo.pimentel@synopsys.com>; Jingoo Han <jingoohan1@gmail.com>; Simon Horman
> <horms@verge.net.au>; Shawn Lin <shawn.lin@rock-chips.com>; Tom Joseph <tjoseph@cadence.com>; Heiko Stuebner
> <heiko@sntech.de>; linux-rockchip@lists.infradead.org; Lad Prabhakar <prabhakar.csengg@gmail.com>
> Subject: RE: [PATCH v6 06/11] PCI: endpoint: Add support to handle multiple base for mapping outbound memory
>
> Hi Prabhakar-san,
>
> > From: Prabhakar Mahadev Lad, Sent: Friday, April 3, 2020 6:12 PM
> <snip>
> > > > @@ -122,31 +167,56 @@ EXPORT_SYMBOL_GPL(pci_epc_mem_exit);
> > > >  void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
> > > >       phys_addr_t *phys_addr, size_t size)
> > > >  {
> > > > -	int pageno;
> > > >  	void __iomem *virt_addr = NULL;
> > > > -	struct pci_epc_mem *mem = epc->mem;
> > > > -	unsigned int page_shift = ilog2(mem->page_size);
> > > > +	struct pci_epc_mem *mem;
> > > > +	unsigned int page_shift;
> > > > +	int pageno = -EINVAL;
> > > >  	int order;
> > > > +	int i;
> > > >
> > > > -	size = ALIGN(size, mem->page_size);
> > > > -	order = pci_epc_mem_get_order(mem, size);
> > > > -
> > > > -	mutex_lock(&mem->lock);
> > > > -	pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order);
> > > > -	if (pageno < 0)
> > > > -		goto ret;
> > > > +	for (i = 0; i < epc->num_windows; i++) {
> > > > +		mem = epc->windows[i];
> > > > +		mutex_lock(&mem->lock);
> > >
> > > This is just my feeling, but calling mutex_lock() inside the loop seems
> > > to add overhead. And if we take the lock outside the loop,
> > > I think a single mutex_unlock() call would be enough.
> > >
> > But the mutex is per window; are you suggesting we add a global mutex?
>
> Oops, that's right. I'd like to withdraw that comment.
>
> > > > +		size = ALIGN(size, mem->window.page_size);
> > >
> > > I'm sorry, I should have caught this in the previous review,
> > > but overwriting size here can cause a problem on the second and later iterations of the loop.
> > > So the first argument of ALIGN() should be kept unchanged across the loop.
> > >
> > Could you please elaborate on this?
>
> My concern is the following.
>
> For example, suppose the size argument of pci_epc_mem_alloc_addr() is 4096.
> epc->windows[0].window.page_size = 8192
>  --> then size is rounded up to 8192.
>
> epc->windows[1].window.page_size = 4096
>  --> since size was already changed to 8192 in the first iteration, this window
>      would also allocate 8192, but it should be 4096.
>
> Does such a case never happen?
> (Or, is my understanding incorrect?)
>
Good catch, yes, that needs fixing, probably by using a local variable for the aligned size.
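
Something along the lines of the following, purely an untested sketch of the
idea (the local variable name "align_size" is just what I picked here), where
the caller's size is left untouched and a per-window aligned copy is used
instead:

void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
				     phys_addr_t *phys_addr, size_t size)
{
	void __iomem *virt_addr = NULL;
	struct pci_epc_mem *mem;
	unsigned int page_shift;
	size_t align_size;
	int pageno;
	int order;
	int i;

	for (i = 0; i < epc->num_windows; i++) {
		mem = epc->windows[i];
		mutex_lock(&mem->lock);
		/* align a per-window copy so the caller's size is preserved */
		align_size = ALIGN(size, mem->window.page_size);
		order = pci_epc_mem_get_order(mem, align_size);

		pageno = bitmap_find_free_region(mem->bitmap, mem->pages,
						 order);
		if (pageno >= 0) {
			page_shift = ilog2(mem->window.page_size);
			*phys_addr = mem->window.phys_base +
				     ((phys_addr_t)pageno << page_shift);
			virt_addr = ioremap(*phys_addr, align_size);
			if (!virt_addr)
				bitmap_release_region(mem->bitmap,
						      pageno, order);
			mutex_unlock(&mem->lock);
			return virt_addr;
		}
		mutex_unlock(&mem->lock);
	}

	return virt_addr;
}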

Cheers,
--Prabhakar

> Best regards,
> Yoshihiro Shimoda
>
>
> > > > +		order = pci_epc_mem_get_order(mem, size);
> > > >
> > > > -	*phys_addr = mem->phys_base + ((phys_addr_t)pageno << page_shift);
> > > > -	virt_addr = ioremap(*phys_addr, size);
> > > > -	if (!virt_addr)
> > > > -		bitmap_release_region(mem->bitmap, pageno, order);
> > > > +		pageno = bitmap_find_free_region(mem->bitmap, mem->pages,
> > > > +						 order);
> > > > +		if (pageno >= 0) {
> > > > +			page_shift = ilog2(mem->window.page_size);
> > > > +			*phys_addr = mem->window.phys_base +
> > > > +				     ((phys_addr_t)pageno << page_shift);
> > > > +			virt_addr = ioremap(*phys_addr, size);
> > > > +			if (!virt_addr)
> > > > +				bitmap_release_region(mem->bitmap,
> > > > +						      pageno, order);
> > > > +			mutex_unlock(&mem->lock);
> > > > +			return virt_addr;
> > >
> > > As I mentioned above, if mutex_lock() is called at out-of-loop,
> > > we can use "goto ret;" here like the original code,
> > >
> > > > +		}
> > > > +		mutex_unlock(&mem->lock);
> > >
> > > and we can remove this.
> > >
> > > > +	}
> > > >
> > > > -ret:
> > > > -	mutex_unlock(&mem->lock);
> > > >  	return virt_addr;
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr);
> > > >
> > > > +struct pci_epc_mem *pci_epc_get_matching_window(struct pci_epc *epc,
> > > > +						 phys_addr_t phys_addr)
> > > > +{
> > > > +	struct pci_epc_mem *mem;
> > > > +	int i;
> > > > +
> > > > +	for (i = 0; i < epc->num_windows; i++) {
> > > > +		mem = epc->windows[i];
> > > > +
> > > > +		if (phys_addr >= mem->window.phys_base &&
> > > > +		    phys_addr < (mem->window.phys_base + mem->window.size))
> > > > +			return mem;
> > > > +	}
> > > > +
> > > > +	return NULL;
> > > > +}
> > > > +
> > > >  /**
> > > >   * pci_epc_mem_free_addr() - free the allocated memory address
> > > >   * @epc: the EPC device on which memory was allocated
> > > > @@ -159,14 +229,23 @@ EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr);
> > > >  void pci_epc_mem_free_addr(struct pci_epc *epc, phys_addr_t phys_addr,
> > > >     void __iomem *virt_addr, size_t size)
> > > >  {
> > > > +	struct pci_epc_mem *mem;
> > > > +	unsigned int page_shift;
> > > > +	size_t page_size;
> > > >  	int pageno;
> > > > -	struct pci_epc_mem *mem = epc->mem;
> > > > -	unsigned int page_shift = ilog2(mem->page_size);
> > > >  	int order;
> > > >
> > > > +	mem = pci_epc_get_matching_window(epc, phys_addr);
> > > > +	if (!mem) {
> > > > +		pr_err("failed to get matching window\n");
> > > > +		return;
> > > > +	}
> > > > +
> > > > +	page_size = mem->window.page_size;
> > > > +	page_shift = ilog2(page_size);
> > > >  	iounmap(virt_addr);
> > > > -	pageno = (phys_addr - mem->phys_base) >> page_shift;
> > > > -	size = ALIGN(size, mem->page_size);
> > > > +	pageno = (phys_addr - mem->window.phys_base) >> page_shift;
> > > > +	size = ALIGN(size, page_size);
> > > >  	order = pci_epc_mem_get_order(mem, size);
> > > >  	mutex_lock(&mem->lock);
> > > >  	bitmap_release_region(mem->bitmap, pageno, order);
> > > > diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h
> > > > index e0ed9d01f6e5..d5da11cf0f2a 100644
> > > > --- a/include/linux/pci-epc.h
> > > > +++ b/include/linux/pci-epc.h
> > > > @@ -65,20 +65,28 @@ struct pci_epc_ops {
> > > >  	struct module *owner;
> > > >  };
> > > >
> > > > +/**
> > > > + * struct pci_epc_mem_window - address window of the endpoint controller
> > > > + * @phys_base: physical base address of the PCI address window
> > > > + * @size: the size of the PCI address window
> > > > + * @page_size: size of each page
> > > > + */
> > > > +struct pci_epc_mem_window {
> > > > +	phys_addr_t	phys_base;
> > > > +	size_t		size;
> > > > +	size_t		page_size;
> > > > +};
> > > > +
> > > >  /**
> > > >   * struct pci_epc_mem - address space of the endpoint controller
> > > > - * @phys_base: physical base address of the PCI address space
> > > > - * @size: the size of the PCI address space
> > > > + * @window: address window of the endpoint controller
> > > >   * @bitmap: bitmap to manage the PCI address space
> > > > - * @pages: number of bits representing the address region
> > > > - * @page_size: size of each page
> > > >   * @lock: mutex to protect bitmap
> > > > + * @pages: number of bits representing the address region
> > >
> > > Perhaps, we should not change the "@pages" line.
> > >
> > OK, I will drop this change.
> >
> > Cheers,
> > --Prabhakar
> >
> > > Best regards,
> > > Yoshihiro Shimoda


