From: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
To: Prabhakar Mahadev Lad <prabhakar.mahadev-lad.rj@bp.renesas.com>,
Lad Prabhakar <prabhakar.csengg@gmail.com>
Cc: Andrew Murray <andrew.murray@arm.com>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"linux-renesas-soc@vger.kernel.org"
<linux-renesas-soc@vger.kernel.org>,
"linux-rockchip@lists.infradead.org"
<linux-rockchip@lists.infradead.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
Bjorn Helgaas <bhelgaas@google.com>,
Rob Herring <robh+dt@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Kishon Vijay Abraham I <kishon@ti.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
Arnd Bergmann <arnd@arndb.de>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Jingoo Han <jingoohan1@gmail.com>,
Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
Marek Vasut <marek.vasut+renesas@gmail.com>,
Shawn Lin <shawn.lin@rock-chips.com>,
Heiko Stuebner <heiko@sntech.de>
Subject: RE: [PATCH v5 4/7] PCI: endpoint: Add support to handle multiple base for mapping outbound memory
Date: Tue, 17 Mar 2020 11:04:47 +0000
Message-ID: <TYAPR01MB4544DD58C495D0ED223B1A7BD8F60@TYAPR01MB4544.jpnprd01.prod.outlook.com>
In-Reply-To: <OSBPR01MB359001B994CFC0CB45170AB0AAF60@OSBPR01MB3590.jpnprd01.prod.outlook.com>
Hi Prabhakar-san,
Just my opinion, but the automatic line wrapping in your email client should
be disabled, or its line width set to a larger value.
# On my side, I set it to 132 characters in Outlook :)
> From: Prabhakar Mahadev Lad, Sent: Tuesday, March 17, 2020 7:04 PM
<snip>
> > > -int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base,
> > size_t size,
> > > - size_t page_size)
> > > +int __pci_epc_mem_init(struct pci_epc *epc, struct
> > pci_epc_mem_window *windows,
> > > + int num_windows)
> > > {
> > > - int ret;
> > > - struct pci_epc_mem *mem;
> > > - unsigned long *bitmap;
> > > + struct pci_epc_mem *mem = NULL;
> > > + unsigned long *bitmap = NULL;
> > > unsigned int page_shift;
> > > - int pages;
> > > + size_t page_size;
> > > int bitmap_size;
> > > -
> > > - if (page_size < PAGE_SIZE)
> > > - page_size = PAGE_SIZE;
> > > -
> > > - page_shift = ilog2(page_size);
> > > - pages = size >> page_shift;
> > > - bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
> > > -
> > > - mem = kzalloc(sizeof(*mem), GFP_KERNEL);
> > > - if (!mem) {
> > > - ret = -ENOMEM;
> > > - goto err;
> > > - }
> > > -
> > > - bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> > > - if (!bitmap) {
> > > - ret = -ENOMEM;
> > > - goto err_mem;
> > > + int pages;
> > > + int ret;
> > > + int i;
> > > +
> > > + epc->mem_windows = 0;
> > > +
> > > + if (!windows)
> > > + return -EINVAL;
> > > +
> > > + if (num_windows <= 0)
> > > + return -EINVAL;
> > > +
> > > + epc->mem = kcalloc(num_windows, sizeof(*mem), GFP_KERNEL);
> > > + if (!epc->mem)
> > > + return -EINVAL;
> > > +
> > > + for (i = 0; i < num_windows; i++) {
> > > + page_size = windows[i].page_size;
> > > + if (page_size < PAGE_SIZE)
> > > + page_size = PAGE_SIZE;
> > > + page_shift = ilog2(page_size);
> > > + pages = windows[i].size >> page_shift;
> > > + bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
> > > +
> > > + mem = kzalloc(sizeof(*mem), GFP_KERNEL);
> > > + if (!mem) {
> > > + ret = -ENOMEM;
> > > + goto err_mem;
> > > + }
> > > +
> > > + bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> > > + if (!bitmap) {
> > > + ret = -ENOMEM;
> > > + goto err_mem;
> > > + }
> > > +
> > > + mem->bitmap = bitmap;
> > > + mem->window.phys_base = windows[i].phys_base;
> >
> > I could not understand why the window member is needed.
> > I think original members (just phys_base and size) are enough.
> > Also, this function doesn't store the page_size to mem->window.page_size.
I'm sorry, but I meant the window member on the left-hand side (mem->window.phys_base).
In other words, this patch changes struct pci_epc_mem as shown below, but
I think this change is not needed, because struct pci_epc will have
multiple windows as an "array of address spaces of the endpoint controller".
---
struct pci_epc_mem {
- phys_addr_t phys_base;
- size_t size;
+ struct pci_epc_mem_window window;
---
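For reference, here is a rough sketch of the layout I have in mind (just an
illustration of the idea; member names such as "windows" and "num_windows"
are placeholders, not a final proposal):
---
/* struct pci_epc_mem keeps its original members (no embedded window) */
struct pci_epc_mem {
	phys_addr_t	phys_base;	/* physical base of this window */
	size_t		size;		/* size of this window */
	unsigned long	*bitmap;	/* allocation bitmap */
	size_t		page_size;	/* allocation granularity */
	int		pages;		/* number of bits in the bitmap */
};

/* struct pci_epc carries the array of windows instead */
struct pci_epc {
	/* ... other existing members ... */
	struct pci_epc_mem	**windows;	/* all outbound windows */
	unsigned int		num_windows;
	struct pci_epc_mem	*mem;		/* default window (= windows[0]) */
};
---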
> Because, for example on RZ/Gx platforms, the following are the windows on the endpoint device
> where the root's addresses can be mapped, whereas on other platforms at the moment there
> is just a single window to map. Also, on RZ/Gx platforms, if a window of, say, 1K is allocated,
> the rest of that window cannot be used for other allocations.
>
> 1: 0xfe000000 0 0x80000
> 2: 0xfe100000 0 0x100000
> 3: 0xfe200000 0 0x200000
> 4: 0x30000000 0 0x8000000
> 5: 0x38000000 0 0x8000000
>
> struct pci_epc_mem_window represents the above windows.
Yes, I understood it.
> window.page_size is set by endpoint controller drivers as done in this patch.
I meant the left-hand side. Nothing sets mem->window.page_size, so
its value appears to stay 0. Of course, for now, this is not a problem because
nothing uses that value.
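(Just for clarity, from the hunks above my reading is that struct
pci_epc_mem_window carries roughly the members below; this is only how I read
the patch, not necessarily its exact definition.)
---
struct pci_epc_mem_window {
	phys_addr_t	phys_base;	/* physical base of the window */
	size_t		size;		/* size of the window */
	size_t		page_size;	/* allocation granularity */
};
---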
<snip>
> > > /**
> > > * struct pci_epc_mem - address space of the endpoint controller
> > > - * @phys_base: physical base address of the PCI address space
> > > - * @size: the size of the PCI address space
> > > + * @window: address window of the endpoint controller
> > > * @bitmap: bitmap to manage the PCI address space
> > > - * @pages: number of bits representing the address region
> > > * @page_size: size of each page
> > > + * @pages: number of bits representing the address region
> > > */
> > > struct pci_epc_mem {
> > > - phys_addr_t phys_base;
> > > - size_t size;
> > > + struct pci_epc_mem_window window;
> > > unsigned long *bitmap;
> > > size_t page_size;
> > > int pages;
> > > @@ -85,7 +97,8 @@ struct pci_epc_mem {
> > > * @dev: PCI EPC device
> > > * @pci_epf: list of endpoint functions present in this EPC device
> > > * @ops: function pointers for performing endpoint operations
> > > - * @mem: address space of the endpoint controller
> >
> > If my idea is acceptable, this should be "default address space ...".
> >
> Could you please elaborate more on how you would like the structures to be organized.
* @mem: default address space of the endpoint controller.
And, assuming the "array of address spaces of the endpoint controller"
is renamed to struct pci_epc_mem **windows, then when __pci_epc_mem_init() succeeds,
the function should set the mem value to the first window right before returning, like below:
+ epc->mem = epc->windows[0];
+ epc->num_windows = num_windows;
return 0;
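(A rough sketch only, reusing the placeholder names from my earlier sketch;
the tail of __pci_epc_mem_init() would then look something like this:)
---
	/* ... per-window allocation loop as in this patch ... */

	epc->num_windows = num_windows;
	/* keep epc->mem as the default (first) window for existing users */
	epc->mem = epc->windows[0];

	return 0;
---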
Best regards,
Yoshihiro Shimoda