From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: <linux-cxl@vger.kernel.org>, <ben.widawsky@intel.com>,
<vishal.l.verma@intel.com>, <alison.schofield@intel.com>,
<ira.weiny@intel.com>, <linux-pci@vger.kernel.org>
Subject: Re: [PATCH 2/8] cxl/pci: Cleanup cxl_map_device_regs()
Date: Thu, 17 Mar 2022 10:07:57 +0000
Message-ID: <20220317100757.00005f2b@Huawei.com>
In-Reply-To: <164740403286.3912056.2514975283929305856.stgit@dwillia2-desk3.amr.corp.intel.com>

On Tue, 15 Mar 2022 21:13:52 -0700
Dan Williams <dan.j.williams@intel.com> wrote:
> Use a loop to reduce the duplicated code in cxl_map_device_regs(). This
> is in preparation for deleting cxl_map_regs().
>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Trivial style comments inline. Otherwise LGTM.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> drivers/cxl/core/regs.c | 51 ++++++++++++++++++-----------------------------
> 1 file changed, 20 insertions(+), 31 deletions(-)
>
> diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c
> index bd6ae14b679e..bd766e461f7d 100644
> --- a/drivers/cxl/core/regs.c
> +++ b/drivers/cxl/core/regs.c
> @@ -211,42 +211,31 @@ int cxl_map_device_regs(struct pci_dev *pdev,
> struct cxl_device_regs *regs,
> struct cxl_register_map *map)
> {
> + resource_size_t phys_addr =
> + pci_resource_start(pdev, map->barno) + map->block_offset;
I'm not totally convinced by this refactoring as it's ugly either
way... Still your code, and I don't care that strongly ;)
> struct device *dev = &pdev->dev;
> - resource_size_t phys_addr;
> -
> - phys_addr = pci_resource_start(pdev, map->barno);
> - phys_addr += map->block_offset;
> -
> - if (map->device_map.status.valid) {
> - resource_size_t addr;
> + struct mapinfo {
> + struct cxl_reg_map *rmap;
> + void __iomem **addr;
> + } mapinfo[] = {
> +		{ .rmap = &map->device_map.status, &regs->status, },
Using a C99 designator for the first member but not the second looks
a bit odd. Was there a strong reason for doing this? I'd just drop
the ".rmap =" as it's not as though we need to look far to see what
it's setting.
> +		{ .rmap = &map->device_map.mbox, &regs->mbox, },
> +		{ .rmap = &map->device_map.memdev, &regs->memdev, },
> + };
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(mapinfo); i++) {
> + struct mapinfo *mi = &mapinfo[i];
> resource_size_t length;
> -
> - addr = phys_addr + map->device_map.status.offset;
> - length = map->device_map.status.size;
> - regs->status = devm_cxl_iomap_block(dev, addr, length);
> - if (!regs->status)
> - return -ENOMEM;
> - }
> -
> - if (map->device_map.mbox.valid) {
> resource_size_t addr;
> - resource_size_t length;
>
> - addr = phys_addr + map->device_map.mbox.offset;
> - length = map->device_map.mbox.size;
> - regs->mbox = devm_cxl_iomap_block(dev, addr, length);
> - if (!regs->mbox)
> - return -ENOMEM;
> - }
> -
> - if (map->device_map.memdev.valid) {
> - resource_size_t addr;
> - resource_size_t length;
> + if (!mi->rmap->valid)
> + continue;
>
> - addr = phys_addr + map->device_map.memdev.offset;
> - length = map->device_map.memdev.size;
> - regs->memdev = devm_cxl_iomap_block(dev, addr, length);
> - if (!regs->memdev)
> + addr = phys_addr + mi->rmap->offset;
> + length = mi->rmap->size;
> + *(mi->addr) = devm_cxl_iomap_block(dev, addr, length);
> + if (!*(mi->addr))
> return -ENOMEM;
> }
>
>