linux-cxl.vger.kernel.org archive mirror
From: Dan Williams <dan.j.williams@intel.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Ben Widawsky <ben.widawsky@intel.com>,
	linux-cxl@vger.kernel.org, Ira Weiny <ira.weiny@intel.com>,
	Alison Schofield <alison.schofield@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>
Subject: Re: [PATCH 3/3] cxl/pci: Retain map information in cxl_mem_probe
Date: Mon, 2 Aug 2021 10:09:45 -0700	[thread overview]
Message-ID: <CAPcyv4iWmZtcGPDNsFebL=kZNQ+GvSEAqfR77gM9Z5bojTEqDQ@mail.gmail.com> (raw)
In-Reply-To: <CAPcyv4jQ=jSOzcyt=0JCn_sSBQEo+23taG4O-QPU9xwTjUKw5w@mail.gmail.com>

On Mon, Aug 2, 2021 at 9:10 AM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Mon, Aug 2, 2021 at 8:57 AM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Fri, 16 Jul 2021 16:15:48 -0700
> > Ben Widawsky <ben.widawsky@intel.com> wrote:
> >
> > > In order for a memdev to participate in cxl_core's port APIs, the
> > > physical address of the memdev's component registers is needed. This is
> > > accomplished by allocating the array of maps in probe so they can be
> > > used after the memdev is created.
> > >
> > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> >
> > Hmm. I don't entirely like passing an array of unknown size
> > into cxl_mem_setup_regs.  It is perhaps paranoid, but I'd
> > separately pass in the size and error out should we overflow,
> > with a suitable message to highlight the bug.
>
> Agree.
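
As a standalone sketch of the suggested approach (illustrative names only, not the actual kernel code): pass the array's capacity alongside the array so the callee can reject an overflow with a clear error rather than silting past the end of the buffer:

```c
/* Illustrative sketch only (not the kernel code): the caller passes the
 * array's capacity explicitly, and the callee errors out before writing
 * past the end.
 */
#include <stddef.h>

struct register_map {
	int barno;
};

/* Returns 0 on success, -1 if more blocks were found than fit. */
int setup_regs_sized(struct register_map maps[], size_t nr_maps, int n_blocks)
{
	size_t n = 0;
	int i;

	for (i = 0; i < n_blocks; i++) {
		if (n >= nr_maps)
			return -1; /* would overflow: highlight the bug */
		maps[n++].barno = i;
	}
	return 0;
}
```

A caller would invoke it as setup_regs_sized(maps, ARRAY_SIZE(maps), n_blocks), keeping the capacity computation next to the array definition where ARRAY_SIZE() is still valid.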

Here's the incremental diff I came up with:

diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index c370ab2e48bc..ff72286142e7 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -1086,7 +1086,8 @@ static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi,
  * regions. The purpose of this function is to enumerate and map those
  * registers.
  */
-static int cxl_mem_setup_regs(struct cxl_mem *cxlm, struct cxl_register_map maps[])
+static int cxl_mem_setup_regs(struct cxl_mem *cxlm,
+                             struct cxl_register_maps *maps)
 {
        struct pci_dev *pdev = cxlm->pdev;
        struct device *dev = &pdev->dev;
@@ -1135,7 +1136,9 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm, struct cxl_register_map maps
                if (!base)
                        return -ENOMEM;

-               map = &maps[n_maps];
+               if (n_maps >= ARRAY_SIZE(maps->map))
+                       return -ENXIO;
+               map = &maps->map[n_maps++];
                map->barno = bar;
                map->block_offset = offset;
                map->reg_type = reg_type;
@@ -1147,14 +1150,12 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm, struct cxl_register_map maps

                if (ret)
                        return ret;
-
-               n_maps++;
        }

        pci_release_mem_regions(pdev);

        for (i = 0; i < n_maps; i++) {
-               ret = cxl_map_regs(cxlm, &maps[i]);
+               ret = cxl_map_regs(cxlm, &maps->map[i]);
                if (ret)
                        break;
        }
@@ -1370,7 +1371,7 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)

 static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
-       struct cxl_register_map maps[CXL_REGLOC_RBI_TYPES];
+       struct cxl_register_maps maps;
        struct cxl_memdev *cxlmd;
        struct cxl_mem *cxlm;
        int rc;
@@ -1383,7 +1384,7 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        if (IS_ERR(cxlm))
                return PTR_ERR(cxlm);

-       rc = cxl_mem_setup_regs(cxlm, maps);
+       rc = cxl_mem_setup_regs(cxlm, &maps);
        if (rc)
                return rc;

diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
index 8c1a58813816..5b7828003b76 100644
--- a/drivers/cxl/pci.h
+++ b/drivers/cxl/pci.h
@@ -2,6 +2,7 @@
 /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
 #ifndef __CXL_PCI_H__
 #define __CXL_PCI_H__
+#include "cxlmem.h"

 #define CXL_MEMORY_PROGIF      0x10

@@ -29,4 +30,8 @@

 #define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)

+struct cxl_register_maps {
+       struct cxl_register_map map[CXL_REGLOC_RBI_TYPES];
+};
+
 #endif /* __CXL_PCI_H__ */
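
Stepping outside the patch for a moment: the reason the wrapper struct helps is that a bare array parameter decays to a pointer, so the callee cannot recover the array's length (the kernel's ARRAY_SIZE() would refuse to compile on it), while a struct member does not decay and a sizeof-based bounds check keeps working. A minimal standalone sketch (illustrative names, not the kernel code):

```c
/* Illustrative sketch (not the kernel code) of why wrapping a fixed-size
 * array in a struct helps: a struct member does not decay to a pointer,
 * so the callee can bounds-check against the real capacity.
 */
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
#define RBI_TYPES 3 /* stand-in for CXL_REGLOC_RBI_TYPES */

struct register_map {
	int barno;
};

struct register_maps {
	struct register_map map[RBI_TYPES];
};

/* Returns 0 on success, -1 if more blocks were found than fit. */
int setup_regs_wrapped(struct register_maps *maps, int n_blocks)
{
	size_t n_maps = 0;
	int i;

	for (i = 0; i < n_blocks; i++) {
		if (n_maps >= ARRAY_SIZE(maps->map))
			return -1; /* would overflow: highlight the bug */
		maps->map[n_maps++].barno = i;
	}
	return 0;
}
```

The callee now owns the overflow check, so no caller can accidentally pass a mismatched size.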

  reply	other threads:[~2021-08-02 17:09 UTC|newest]

Thread overview: 10+ messages
2021-07-16 23:15 [PATCH 0/3] Rework register enumeration for later reuse Ben Widawsky
2021-07-16 23:15 ` [PATCH 1/3] cxl/pci: Ignore unknown register block types Ben Widawsky
2021-08-02 15:49   ` Jonathan Cameron
2021-07-16 23:15 ` [PATCH 2/3] cxl/pci: Simplify register setup Ben Widawsky
2021-08-02 15:50   ` Jonathan Cameron
2021-07-16 23:15 ` [PATCH 3/3] cxl/pci: Retain map information in cxl_mem_probe Ben Widawsky
2021-08-02 15:56   ` Jonathan Cameron
2021-08-02 16:10     ` Dan Williams
2021-08-02 17:09       ` Dan Williams [this message]
2021-08-03  7:58         ` Jonathan Cameron
