From: Max Gurtovoy <maxg@mellanox.com>
To: James Smart <james.smart@broadcom.com>,
	<linux-nvme@lists.infradead.org>,  <kbusch@kernel.org>,
	<hch@lst.de>, <sagi@grimberg.me>, <martin.petersen@oracle.com>
Cc: axboe@kernel.dk, vladimirk@mellanox.com, idanb@mellanox.com,
	israelr@mellanox.com, shlomin@mellanox.com, oren@mellanox.com
Subject: Re: [PATCH 07/16] nvme-rdma: Add metadata/T10-PI support
Date: Thu, 23 Jan 2020 11:59:32 +0200	[thread overview]
Message-ID: <07d7772c-7ec5-d29f-ae84-c177321145ae@mellanox.com> (raw)
In-Reply-To: <522e0efe-5907-b28d-ef90-4ceca0fb3103@broadcom.com>


On 1/22/2020 11:57 PM, James Smart wrote:
>
>
> On 12/2/2019 6:48 AM, Max Gurtovoy wrote:
>> @@ -1215,34 +1283,115 @@ static int nvme_rdma_map_sg_single(struct nvme_rdma_queue *queue,
>>       return 0;
>>   }
>>
>> -static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
>> -        struct nvme_rdma_request *req, struct nvme_command *c,
>> -        int count)
>> +#ifdef CONFIG_BLK_DEV_INTEGRITY
>> +static void nvme_rdma_set_diff_domain(struct nvme_command *cmd, struct bio *bio,
>> +        struct ib_sig_domain *domain, struct request *rq)
>>   {
>> -    struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl;
>> -    int nr;
>> +    struct blk_integrity *bi = blk_get_integrity(bio->bi_disk);
>> +    struct nvme_ns *ns = rq->rq_disk->private_data;
>> +
>> +    WARN_ON(bi == NULL);
>>
>> -    req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs);
>> -    if (WARN_ON_ONCE(!req->mr))
>> -        return -EAGAIN;
>> +    domain->sig_type = IB_SIG_TYPE_T10_DIF;
>> +    domain->sig.dif.bg_type = IB_T10DIF_CRC;
>> +    domain->sig.dif.pi_interval = 1 << bi->interval_exp;
>> +    domain->sig.dif.ref_tag = le32_to_cpu(cmd->rw.reftag);
>>
>>       /*
>> -     * Align the MR to a 4K page size to match the ctrl page size and
>> -     * the block virtual boundary.
>> +     * At the moment we hard code those, but in the future
>> +     * we will take them from cmd.
>>        */
>> -    nr = ib_map_mr_sg(req->mr, req->data_sgl.sg_table.sgl, count, NULL,
>> -              SZ_4K);
>> -    if (unlikely(nr < count)) {
>> -        ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr);
>> -        req->mr = NULL;
>> -        if (nr < 0)
>> -            return nr;
>> -        return -EINVAL;
>> +    domain->sig.dif.apptag_check_mask = 0xffff;
>> +    domain->sig.dif.app_escape = true;
>> +    domain->sig.dif.ref_escape = true;
>> +    if (ns->pi_type != NVME_NS_DPS_PI_TYPE3)
>> +        domain->sig.dif.ref_remap = true;
>> +}
>> +
>>
>
> On a per-io basis, there needs to be a specific description of the DIF
> information used to program the port hardware: things such as block
> size, type, and so on. I see this routine using a mix of the bio that
> is associated with the original request as well as the namespace
> pointer to get this info. To me, reaching into the bio, as well as
> locating the ns structures, reaches into the other layers too much.
>
> Wouldn't we be better off with the core layer doing all the reaching
> and setting up a pi structure in the nvme_request with this
> information?  Replace has_pi with this pi struct, so that
> "nvme_req(rq)->pi.pi_type != 0" becomes the equivalent of has_pi?  If
> we didn't want to replicate the PI info, then nvme_request can simply
> add a pointer to the ns, and the ns can be looked at explicitly to
> gather the attributes.
>
> Thoughts ?
>
The NVMe namespace is used by the RDMA/TCP/FC transport drivers in each
queue_rq implementation. We can pass it down explicitly instead of
reaching for it from the rq, if that looks better.
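
For illustration only, the ns is already at hand in the queue_rq path;
the extra ns argument below is hypothetical and not part of the posted
series:

	/* in nvme_rdma_queue_rq(): the ns is already available here */
	struct nvme_ns *ns = hctx->queue->queuedata;

	/* hypothetical: pass it down instead of re-deriving it from
	 * rq->rq_disk->private_data inside the mapping helpers
	 */
	ret = nvme_rdma_map_data(queue, rq, c, ns);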

The NVMe PI attributes (e.g. reftag/check_flags/action_flags) are all
set in the core layer.
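
That is roughly the shape of the existing nvme_setup_rw() logic; a
trimmed sketch (exact helper names may differ between kernel versions):

	switch (ns->pi_type) {
	case NVME_NS_DPS_PI_TYPE3:
		control |= NVME_RW_PRINFO_PRCHK_GUARD;
		break;
	case NVME_NS_DPS_PI_TYPE1:
	case NVME_NS_DPS_PI_TYPE2:
		control |= NVME_RW_PRINFO_PRCHK_GUARD |
				NVME_RW_PRINFO_PRCHK_REF;
		/* initial reference tag for type 1/2 protection */
		cmnd->rw.reftag = cpu_to_le32(t10_pi_ref_tag(req));
		break;
	}
	cmnd->rw.control = cpu_to_le16(control);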

Translating those PI attributes into the HW configuration of each
transport should be done in the transport driver.
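
If we do go with the pi struct James suggests, a minimal sketch could
look like the following (purely hypothetical, field names made up); the
transport would then build its HW descriptors (e.g. the ib_sig_domain
above) from nvme_req(rq)->pi alone, without touching the bio or the ns:

	/* hypothetical sketch, not part of this series */
	struct nvme_request_pi {
		u8	pi_type;		/* 0 means no PI for this request */
		u8	interval_exp;		/* protection interval, from blk_integrity */
		u16	apptag_check_mask;
		u32	ref_tag;
	};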


> -- james
>

