From: Sagi Grimberg <sagi@grimberg.me>
To: Max Gurtovoy <maxg@mellanox.com>, linux-nvme@lists.infradead.org
Cc: linux-block@vger.kernel.org, netdev@vger.kernel.org,
	Christoph Hellwig <hch@lst.de>,
	Keith Busch <keith.busch@intel.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH v4 11/13] nvmet-tcp: add NVMe over TCP target driver
Date: Thu, 29 Nov 2018 17:22:43 -0800	[thread overview]
Message-ID: <9068fee7-e299-afe1-3d29-22c2448379c7@grimberg.me> (raw)
In-Reply-To: <4ad0ce00-46a6-9aff-44ba-6c4374a9386e@mellanox.com>


>> +static inline void nvmet_tcp_put_cmd(struct nvmet_tcp_cmd *cmd)
>> +{
>> +    if (unlikely(cmd == &cmd->queue->connect))
>> +        return;
> 
> if you don't return connect cmd to the list please don't add it to it in 
> the first place (during alloc_cmd). and if you use it once, we might 
> think of a cleaner/readable way to do it.
> 
> why there is a difference between regular cmd and connect_cmd ? can't 
> you increase the nr_cmds by 1 and not distinguish between the two types ?

We don't have the queue size before we connect, so we allocate a
single command for the nvmf connect, and once we process it we
allocate the rest.

It's done this way so that the command processing code does not need a
dedicated condition for this case; I prefer to keep it in the command
list for the first pass so that the normal command processing code
doesn't carry this check.

The reason it doesn't go back to the list is that it does not belong
to the cmds array, which we rely on to look up our command context
based on the ttag.

>> +static int nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port,
>> +        struct socket *newsock)
>> +{
>> +    struct nvmet_tcp_queue *queue;
>> +    int ret;
>> +
>> +    queue = kzalloc(sizeof(*queue), GFP_KERNEL);
>> +    if (!queue)
>> +        return -ENOMEM;
>> +
>> +    INIT_WORK(&queue->release_work, nvmet_tcp_release_queue_work);
>> +    INIT_WORK(&queue->io_work, nvmet_tcp_io_work);
>> +    queue->sock = newsock;
>> +    queue->port = port;
>> +    queue->nr_cmds = 0;
>> +    spin_lock_init(&queue->state_lock);
>> +    queue->state = NVMET_TCP_Q_CONNECTING;
>> +    INIT_LIST_HEAD(&queue->free_list);
>> +    init_llist_head(&queue->resp_list);
>> +    INIT_LIST_HEAD(&queue->resp_send_list);
>> +
>> +    queue->idx = ida_simple_get(&nvmet_tcp_queue_ida, 0, 0, GFP_KERNEL);
>> +    if (queue->idx < 0) {
>> +        ret = queue->idx;
>> +        goto out_free_queue;
>> +    }
>> +
>> +    ret = nvmet_tcp_alloc_cmd(queue, &queue->connect);
>> +    if (ret)
>> +        goto out_ida_remove;
>> +
>> +    ret = nvmet_sq_init(&queue->nvme_sq);
>> +    if (ret)
>> +        goto out_ida_remove;
> 
> please add a goto free_connect_cmd:
> 
> nvmet_tcp_free_cmd(&queue->connect);
> 
> to avoid memory leak in error flow.

Will do, thanks.


Thread overview: 40+ messages
2018-11-27 23:16 [PATCH v4 00/13] TCP transport binding for NVMe over Fabrics Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 01/13] ath6kl: add ath6kl_ prefix to crypto_type Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 02/13] datagram: open-code copy_page_to_iter Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 03/13] iov_iter: pass void csum pointer to csum_and_copy_to_iter Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 04/13] datagram: consolidate datagram copy to iter helpers Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 05/13] iov_iter: introduce hash_and_copy_to_iter helper Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 06/13] datagram: introduce skb_copy_and_hash_datagram_iter helper Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 07/13] nvmet: Add install_queue callout Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 08/13] nvme-fabrics: allow user passing header digest Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 09/13] nvme-fabrics: allow user passing data digest Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 10/13] nvme-tcp: Add protocol header Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 11/13] nvmet-tcp: add NVMe over TCP target driver Sagi Grimberg
2018-11-29  0:16   ` Max Gurtovoy
2018-11-30  1:22     ` Sagi Grimberg [this message]
2018-11-27 23:16 ` [PATCH v4 12/13] nvmet: allow configfs tcp trtype configuration Sagi Grimberg
2018-11-27 23:16 ` [PATCH v4 13/13] nvme-tcp: add NVMe over TCP host driver Sagi Grimberg
2018-11-28  7:01 ` [PATCH v4 00/13] TCP transport binding for NVMe over Fabrics Christoph Hellwig
2018-11-30  1:24   ` Sagi Grimberg
2018-11-30  2:14     ` David Miller
2018-11-30 20:37       ` Sagi Grimberg
