From: Niklas Cassel <Niklas.Cassel@wdc.com>
To: "Javier González" <javier@javigon.com>
Cc: "linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"hch@lst.de" <hch@lst.de>,
	"kbusch@kernel.org" <kbusch@kernel.org>,
	"sagi@grimberg.me" <sagi@grimberg.me>,
	"axboe@kernel.dk" <axboe@kernel.dk>,
	"joshi.k@samsung.com" <joshi.k@samsung.com>,
	"k.jensen@samsung.com" <k.jensen@samsung.com>
Subject: Re: [PATCH] nvme: report capacity 0 for non supported ZNS SSDs
Date: Mon, 2 Nov 2020 17:10:22 +0000	[thread overview]
Message-ID: <20201102171021.GB203505@localhost.localdomain>
In-Reply-To: <20201102130411.2vqqrma6zetec543@MacBook-Pro.localdomain>

On Mon, Nov 02, 2020 at 02:04:11PM +0100, Javier González wrote:
> On 30.10.2020 14:31, Niklas Cassel wrote:
> > On Thu, Oct 29, 2020 at 07:57:53PM +0100, Javier González wrote:
> > > Allow ZNS SSDs to be presented to the host even when they implement
> > > features that are not supported by the kernel zoned block device.
> > > 
> > > Instead of rejecting the SSD at the NVMe driver level, deal with this in
> > > the block layer by setting capacity to 0, as we do with other things
> > > such as unsupported PI configurations. This allows the use of standard
> > > management tools such as nvme-cli to choose a different format or
> > > firmware slot that is compatible with the Linux zoned block device.
> > > 
> > > Signed-off-by: Javier González <javier.gonz@samsung.com>
> > > ---
> > >  drivers/nvme/host/core.c |  5 +++++
> > >  drivers/nvme/host/nvme.h |  1 +
> > >  drivers/nvme/host/zns.c  | 31 ++++++++++++++-----------------
> > >  3 files changed, 20 insertions(+), 17 deletions(-)
> > > 
> > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > index c190c56bf702..9ca4f0a6ff2c 100644
> > > --- a/drivers/nvme/host/core.c
> > > +++ b/drivers/nvme/host/core.c

(snip)
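
(For context, since the core.c hunk is snipped here: per the commit message
above, that hunk is where the capacity gets forced to 0 for an unsupported
zoned namespace. Conceptually it amounts to something like the sketch below.
This is not the literal hunk; capacity, disk and ns are assumed to be the
usual nvme_update_disk_info() variables, and plain set_capacity() is used
just for illustration:)

	/*
	 * Sketch only: expose a zoned namespace with unsupported features
	 * as a zero-capacity disk, so management tools such as nvme-cli
	 * can still reach it to pick a compatible format or firmware slot.
	 */
	if (ns->head->ids.csi == NVME_CSI_ZNS && !ns->zone_sup)
		capacity = 0;
	set_capacity(disk, capacity);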

> > > @@ -44,20 +44,23 @@ int nvme_update_zone_info(struct gendisk *disk, struct nvme_ns *ns,
> > >  	struct nvme_id_ns_zns *id;
> > >  	int status;
> > > 
> > > -	/* Driver requires zone append support */
> > > +	ns->zone_sup = true;
> > 
> > I don't think it is wise to assign it to true here.
> > E.g. if kzalloc() fails, if nvme_submit_sync_cmd() fails,
> > or if nvme_set_max_append() fails, you have already set this to true,
> > but the zoc and power-of-2 checks were never performed.
> 
> I do not think it will matter much as it is just an internal variable.
> If any of the checks you mention fail, then the namespace will not even
> be initialized.
> 
> Is there anything I am missing?

We know that another function will perform an operation (setting the capacity
to 0) depending on ns->zone_sup. Therefore, setting ns->zone_sup = true and
then later to false in the same function introduces a theoretical race window.

IMHO, it is simply better coding practice to use a local variable, so that
the boolean is never transiently true for a namespace that will end up false
by the end of the function.
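
To make the pattern concrete (simplified pseudo-driver code; validate_zns()
is a made-up stand-in for the zoc/zone-size/append checks, not a real
function):

	/* Racy: a reader can observe ns->zone_sup == true mid-validation. */
	ns->zone_sup = true;
	if (!validate_zns(ns))
		ns->zone_sup = false;

	/* Preferred: decide in a local variable, publish the result once. */
	bool supp = validate_zns(ns);
	ns->zone_sup = supp;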

Kind regards,
Niklas

> 
> > Perhaps something like this would be more robust:
> > 
> > @@ -53,18 +53,19 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
> >        struct nvme_command c = { };
> >        struct nvme_id_ns_zns *id;
> >        int status;
> > +       bool new_ns_supp = true;
> > +
> > +       /* default to NS not supported */
> > +       ns->zoned_ns_supp = false;
> > 
> > -       /* Driver requires zone append support */
> >        if (!(le32_to_cpu(log->iocs[nvme_cmd_zone_append]) &
> >                        NVME_CMD_EFFECTS_CSUPP)) {
> >                dev_warn(ns->ctrl->device,
> >                        "append not supported for zoned namespace:%d\n",
> >                        ns->head->ns_id);
> > -               return -EINVAL;
> > -       }
> > -
> > -       /* Lazily query controller append limit for the first zoned namespace */
> > -       if (!ns->ctrl->max_zone_append) {
> > +               new_ns_supp = false;
> > +       } else if (!ns->ctrl->max_zone_append) {
> > +               /* Lazily query controller append limit for the first zoned namespace */
> >                status = nvme_set_max_append(ns->ctrl);
> >                if (status)
> >                        return status;
> > @@ -80,19 +81,16 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
> >        c.identify.csi = NVME_CSI_ZNS;
> > 
> >        status = nvme_submit_sync_cmd(ns->ctrl->admin_q, &c, id, sizeof(*id));
> > -       if (status)
> > -               goto free_data;
> > +       if (status) {
> > +               kfree(id);
> > +               return status;
> > +       }
> > 
> > -       /*
> > -        * We currently do not handle devices requiring any of the zoned
> > -        * operation characteristics.
> > -        */
> >        if (id->zoc) {
> >                dev_warn(ns->ctrl->device,
> >                        "zone operations:%x not supported for namespace:%u\n",
> >                        le16_to_cpu(id->zoc), ns->head->ns_id);
> > -               status = -EINVAL;
> > -               goto free_data;
> > +               new_ns_supp = false;
> >        }
> > 
> >        ns->zsze = nvme_lba_to_sect(ns, le64_to_cpu(id->lbafe[lbaf].zsze));
> > @@ -100,17 +98,14 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
> >                dev_warn(ns->ctrl->device,
> >                        "invalid zone size:%llu for namespace:%u\n",
> >                        ns->zsze, ns->head->ns_id);
> > -               status = -EINVAL;
> > -               goto free_data;
> > +               new_ns_supp = false;
> >        }
> > 
> > +       ns->zoned_ns_supp = new_ns_supp;
> >        q->limits.zoned = BLK_ZONED_HM;
> >        blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
> >        blk_queue_max_open_zones(q, le32_to_cpu(id->mor) + 1);
> >        blk_queue_max_active_zones(q, le32_to_cpu(id->mar) + 1);
> > -free_data:
> > -       kfree(id);
> > -       return status;
> > +       kfree(id);
> > +       return 0;
> > }
> > 
> > static void *nvme_zns_alloc_report_buffer(struct nvme_ns *ns,
> > 
> 
> Sure, we can use a local assignment as you suggest. I'll send a V2 with
> this.
> 
> Javier
> 

Thread overview: 4+ messages
2020-10-29 18:57 [PATCH] nvme: report capacity 0 for non supported ZNS SSDs Javier González
2020-10-30 14:31 ` Niklas Cassel
2020-11-02 13:04   ` Javier González
2020-11-02 17:10     ` Niklas Cassel [this message]
