* [SPDK] Regarding NVMe driver command queue depth.
@ 2017-11-16 19:25 Sreeni Busam
  0 siblings, 0 replies; 9+ messages in thread
From: Sreeni Busam @ 2017-11-16 19:25 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 987 bytes --]

Hi Paul,

I was reading about the driver on the SPDK site and am interested in understanding the queue depth for a device.
"The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128"
When queue depth is mentioned for a device, is it the number of commands that can be issued from the application to the controller and be outstanding at any one time?
Is there an NVMe driver API to set the queue depth? Is my understanding correct that the size of the queue is set at the firmware level?
Please give some detail about this parameter.

Thanks,
Sreeni

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 3191 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [SPDK] Regarding NVMe driver command queue depth.
@ 2017-11-20 20:50 Sreeni Busam
  0 siblings, 0 replies; 9+ messages in thread
From: Sreeni Busam @ 2017-11-20 20:50 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5945 bytes --]

I have attached the program.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Friday, November 17, 2017 11:02 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

Cool, so working as you would expect now?

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Friday, November 17, 2017 11:52 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

Hi Jim,

Thanks for taking a look at the problem. It is a logical bug. I fixed it.
The memory for the qpair was allocated outside the ns_entry loop.
I have two SSD devices and the memory was not allocated for the qpair for second ns_entry.
Here is the problem.
ns_entry = g_namespaces;
     ns_entry->qpair_2 = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
     if (ns_entry->qpair_2 == NULL) {
           printf("The qpair allocation failed.\n");
           exit (0);
     }
     while (ns_entry != NULL) {
           cnt = 1;
           while (cnt) {
           // Fails to be successfully submit
                rc = stellus_spdk_nvme_ns_cmd_write(ns_entry, ns_entry->qpair_2, sequence->buf,
                                     0, /* LBA start */
                                     1, /* number of LBAs */
                                     io_complete, sequence, 0)
           ….


Sreeni

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
Sent: Friday, November 17, 2017 8:44 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

Hi Sreeni,

Can you step through your second call to spdk_nvme_ctrlr_alloc_io_qpair?  The callstack clearly shows that qpair=0x0 was passed into stellus_spdk_nvme_ns_cmd_write() at frame #3.  So I think we should back up and figure out why no I/O qpair was allocated (or maybe it was allocated but not saved in a structure or something).

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of "Sreeni (Sreenivasa) Busam (Stellus)" <s.busam(a)stellus.com<mailto:s.busam(a)stellus.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, November 16, 2017 at 4:57 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

I have been trying to test the number of commands that can be given to the device at a time. I verified that a maximum of 254 commands could be issued for a qpair. So I created a 2nd qpair for ns_entry and issued the I/O commands, it was failing in the first command itself. Is it invalid to create 2 qpair for the same ns_entry and send command to device? The qpair is successfully created, but I could not submit command.
I modified the hello_world program to test this and attached the related code.
Please take a look and let me know what is the problem.

0x000000000040bae2 in nvme_allocate_request (qpair=0x0,
    payload=0x7fffa4726ba0, payload_size=512, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270) at nvme.c:85
#1  0x000000000040996c in _nvme_ns_cmd_rw (ns=0x100ff8ee40, qpair=0x0,
    payload=0x7fffa4726ba0, payload_offset=0, md_offset=0, lba=0, lba_count=1,
    cb_fn=0x4041a4 <write_complete>, cb_arg=0x7b4270, opc=1, io_flags=0,
    apptag_mask=0, apptag=0, check_sgl=true) at nvme_ns_cmd.c:440
#2  0x0000000000409fea in spdk_nvme_ns_cmd_write (ns=0x100ff8ee40, qpair=0x0,
    buffer=0x10000f7000, lba=0, lba_count=1, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270, io_flags=0) at nvme_ns_cmd.c:649
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
#4  0x00000000004046b8 in test_io_func1 () at iostat.c:342
#5  0x0000000000404a94 in main (argc=1, argv=0x7fffa4726db8) at iostat.c:503
(gdb) f 3
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
233                     rc = spdk_nvme_ns_cmd_write(ns_entry->ns, qpair, buffer,
(gdb) p qpair
$1 = (struct spdk_nvme_qpair *) 0x0

If any of you get time, please look at it. Thank you for your suggestion.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Thursday, November 16, 2017 11:26 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] Regarding NVMe driver command queue depth.

Hi Paul,

I was reading about the driver from SPDK site, and interested in understanding the queue depth for a device.
“The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128”
When queue depth is mentioned for device, is it the number of commands that can be issued from application to controller, and outstanding at any time?
Is there NVMe driver API to set the queue depth? Is my understanding correct if I think that the size of queue is at firmware level?
Please give some detail about the parameter.

Thanks,
Sreeni

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 18880 bytes --]

[-- Attachment #3: hello_world_t_1.c --]
[-- Type: text/plain, Size: 17660 bytes --]

/*-
 *   BSD LICENSE
 *
 *   Copyright (c) Intel Corporation.
 *   All rights reserved.
 *
 *   Redistribution and use in source and binary forms, with or without
 *   modification, are permitted provided that the following conditions
 *   are met:
 *
 *     * Redistributions of source code must retain the above copyright
 *       notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above copyright
 *       notice, this list of conditions and the following disclaimer in
 *       the documentation and/or other materials provided with the
 *       distribution.
 *     * Neither the name of Intel Corporation nor the names of its
 *       contributors may be used to endorse or promote products derived
 *       from this software without specific prior written permission.
 *
 *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "spdk/stdinc.h"

#include "spdk/nvme.h"
#include "spdk/env.h"

#include <pthread.h>

struct ctrlr_entry {
	struct spdk_nvme_ctrlr	*ctrlr;
	struct ctrlr_entry	*next;
	char			name[1024];
};

#define		MAX_CMDS	20
#define 	MAX_CTRLR_NAME	1024
struct ns_entry {
	struct spdk_nvme_ctrlr	*ctrlr;
	struct spdk_nvme_ns	*ns;
	int nsid;
	char ctrlr_name[MAX_CTRLR_NAME];
	struct ns_entry		*next;
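	/* One I/O qpair per worker thread; test_io_func1() indexes this array by its thread id (t_id). */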
	struct spdk_nvme_qpair	*qpair[5];
	struct spdk_nvme_qpair	*qpair_2;
	int qpair_allocated;
};

int
stellus_spdk_nvme_ns_cmd_read(struct ns_entry *ns_entry, struct spdk_nvme_qpair *qpair, void *buffer, uint64_t lba, uint32_t lba_count, spdk_nvme_cmd_cb cb_fn, void *cb_arg, uint32_t ioflags);
int
stellus_spdk_nvme_ns_cmd_write(struct ns_entry *ns_entry, struct spdk_nvme_qpair *qpair, void *buffer, uint64_t lba, uint32_t lba_count, spdk_nvme_cmd_cb cb_fn, void *cb_arg, uint32_t ioflags);
int
stellus_qpair_process_completions(struct spdk_nvme_qpair *qpair, int num);
int
stellus_qpair_free(struct spdk_nvme_qpair *qpair);
extern void shm_init();
static void
read_complete(void *arg, const struct spdk_nvme_cpl *completion);
static void
write_complete(void *arg, const struct spdk_nvme_cpl *completion);
void
test_io_func1(void *arg);
spdk_nvme_cmd_cb compl_cbs[MAX_CMDS] = {
	NULL,
	read_complete,
	write_complete,
};

struct ctrlr_entry *g_controllers = NULL;
struct ns_entry *g_namespaces = NULL;

static void
register_ns(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns, int nsid,
				char *name)
{
	struct ns_entry *entry;
	const struct spdk_nvme_ctrlr_data *cdata;

	/*
	 * spdk_nvme_ctrlr is the logical abstraction in SPDK for an NVMe
	 *  controller.  During initialization, the IDENTIFY data for the
	 *  controller is read using an NVMe admin command, and that data
	 *  can be retrieved using spdk_nvme_ctrlr_get_data() to get
	 *  detailed information on the controller.  Refer to the NVMe
	 *  specification for more details on IDENTIFY for NVMe controllers.
	 */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	if (!spdk_nvme_ns_is_active(ns)) {
		printf("Controller %-20.20s (%-20.20s): Skipping inactive NS %u\n",
		       cdata->mn, cdata->sn,
		       spdk_nvme_ns_get_id(ns));
		return;
	}

	entry = calloc(1, sizeof(struct ns_entry));
	if (entry == NULL) {
		perror("ns_entry malloc");
		exit(1);
	}

	entry->ctrlr = ctrlr;
	entry->nsid = nsid;
	strncpy(entry->ctrlr_name, name, MAX_CTRLR_NAME-1);
	entry->ns = ns;
	entry->next = g_namespaces;
	g_namespaces = entry;

	printf("  Namespace ID: %d size: %juGB\n", spdk_nvme_ns_get_id(ns),
	       spdk_nvme_ns_get_size(ns) / 1000000000);
}

struct io_sequence {
	struct ns_entry	*ns_entry;
	char		*buf;
	int		is_completed;
	int cmd;
	spdk_nvme_cmd_cb cmdcb;
	int io_count;
};

static void
read_complete(void *arg, const struct spdk_nvme_cpl *completion)
{
	struct io_sequence *sequence = arg;

	/*
	 * The read I/O has completed.  Print the contents of the
	 *  buffer, free the buffer, then mark the sequence as
	 *  completed.  This will trigger the hello_world() function
	 *  to exit its polling loop.
	 */
	printf("%s", sequence->buf);
	sequence->io_count++;
	if (sequence->io_count == 5) {
		//spdk_dma_free(sequence->buf);
	}
	//printf("A read operation has successfully completed read count [%d].\n", sequence->io_count);
	sequence->is_completed = 1;
	sequence->cmdcb(arg, completion);
	/*
	 * check for errors.
	 */
	if (sequence->io_count == 5) {
		//free(sequence);
		//printf("The sequence struct has been freed.\n");
	}
}

static void
write_complete(void *arg, const struct spdk_nvme_cpl *completion)
{
	struct io_sequence	*sequence = (struct io_sequence *) arg;
	struct ns_entry			*ns_entry = sequence->ns_entry;

	/*
	 * The write I/O has completed.  Free the buffer associated with
	 *  the write I/O and allocate a new zeroed buffer for reading
	 *  the data back from the NVMe namespace.
	 */
	sequence->io_count++;
	if (sequence->io_count == 5) {
		//spdk_dma_free(sequence->buf);
	}
	//printf("A write operation has successfully completed write count [%d].\n", sequence->io_count);
	sequence->cmdcb(arg, completion);
	if (sequence->io_count == 5) {
		//free(sequence);
		//printf("The sequence struct has been freed.\n");
	}
}

static void
io_complete(void *arg, const struct spdk_nvme_cpl *completion)
{
	struct io_sequence  *seq_arg = (struct io_sequence *)arg;
	int type = seq_arg->cmd;

#if 0
	switch (type) {
		case SPDK_NVME_OPC_READ:
		{
			printf("command to NVMe device has completed, op is [%d].\n", type);
			break;
		}
		case SPDK_NVME_OPC_WRITE:
		{
			printf("command to NVMe device has completed, op is [%d].\n", type);
			break;
		}
		default:
			printf("Unknown NVMe command type = [%d].", type);
			break;
	}
#endif
}


int
stellus_spdk_nvme_ns_cmd_read(struct ns_entry *ns_entry, struct spdk_nvme_qpair *qpair, void *buffer, uint64_t lba, uint32_t lba_count, spdk_nvme_cmd_cb cb_fn, void *cb_arg, uint32_t ioflags)
{
	int rc;
	struct io_sequence *sequence = (struct io_sequence *)cb_arg;
	sequence->cmd = SPDK_NVME_OPC_READ;
	sequence->cmdcb = cb_fn;
	rc = spdk_nvme_ns_cmd_read(ns_entry->ns, qpair, buffer,
				   lba, /* LBA start */
				   lba_count, /* number of LBAs */
				   read_complete, sequence, 0);
	if (rc != 0) {
		fprintf(stderr, "starting read I/O failed\n");
		exit(1);
	}
	return (0);
}

int
stellus_spdk_nvme_ns_cmd_write(struct ns_entry *ns_entry, struct spdk_nvme_qpair *qpair, void *buffer, uint64_t lba, uint32_t lba_count, spdk_nvme_cmd_cb cb_fn, void *cb_arg, uint32_t ioflags)
{
	int rc;
	struct io_sequence *sequence = (struct io_sequence *)cb_arg;
		/*
		 * Write the data buffer to LBA 0 of this namespace.  "write_complete" and
		 *  "&sequence" are specified as the completion callback function and
		 *  argument respectively.  write_complete() will be called with the
		 *  value of &sequence as a parameter when the write I/O is completed.
		 *  This allows users to potentially specify different completion
		 *  callback routines for each I/O, as well as pass a unique handle
		 *  as an argument so the application knows which I/O has completed.
		 *
		 * Note that the SPDK NVMe driver will only check for completions
		 *  when the application calls spdk_nvme_qpair_process_completions().
		 *  It is the responsibility of the application to trigger the polling
		 *  process.
		 */
		sequence->cmd = SPDK_NVME_OPC_WRITE;
		sequence->cmdcb = cb_fn;
		printf("value of ns_entry qpair [%p]\n", qpair);
		rc = spdk_nvme_ns_cmd_write(ns_entry->ns, qpair, buffer,
					    lba, /* LBA start */
					    lba_count, /* number of LBAs */
					    write_complete, sequence, 0);
		if (rc != 0) {
			fprintf(stderr, "starting write I/O failed\n");
			exit(1);
		}
		return (0);
}

void
test_io_func1(void *arg)
{
	struct ns_entry			*ns_entry;
	struct io_sequence	*sequence;
	struct io_sequence	*sequence1;
	int 	t_id;
	int				rc;
	int cnt = 5;

	ns_entry = g_namespaces;
	t_id = *(int *)arg - 1;
	printf("starting I/O run on new thread id [%d]\n", *(int *)arg);
	printf("value of devices's ns_entry [%p]\n", ns_entry);
	while (ns_entry != NULL) {
		/*
		 * Allocate an I/O qpair that we can use to submit read/write requests
		 *  to namespaces on the controller.  NVMe controllers typically support
		 *  many qpairs per controller.  Any I/O qpair allocated for a controller
		 *  can submit I/O to any namespace on that controller.
		 *
		 * The SPDK NVMe driver provides no synchronization for qpair accesses -
		 *  the application must ensure only a single thread submits I/O to a
		 *  qpair, and that same thread must also check for completions on that
		 *  qpair.  This enables extremely efficient I/O processing by making all
		 *  I/O operations completely lockless.
		 */
		ns_entry->qpair[t_id] = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
		printf("value of ns_entry qpair [%p]\n", ns_entry->qpair[t_id]);
		//printf("value of ns_entry qpair [%p]\n", &ns_entry->qpair[0]);
		if (ns_entry->qpair[t_id] == NULL) {
			printf("ERROR: spdk_nvme_ctrlr_alloc_io_qpair() failed\n");
			return;
		}

		/*
		 * Use spdk_dma_zmalloc to allocate a 4KB zeroed buffer.  This memory
		 * will be pinned, which is required for data buffers used for SPDK NVMe
		 * I/O operations.
		 */
		sequence = calloc(1, sizeof(struct io_sequence));
		sequence->buf = spdk_dma_zmalloc(0x1000, 0x1000, NULL);
		sequence->is_completed = 0;
		sequence->ns_entry = ns_entry;
		ns_entry->qpair_allocated = 1;

		sequence1 = calloc(1, sizeof(struct io_sequence));
		sequence1->buf = spdk_dma_zmalloc(0x1000, 0x1000, NULL);
		sequence1->is_completed = 0;
		sequence1->ns_entry = ns_entry;
		/*
		 * Print "Hello world!" to sequence.buf.  We will write this data to LBA
		 *  0 on the namespace, and then later read it back into a separate buffer
		 *  to demonstrate the full I/O path.
		 */
		snprintf(sequence->buf, 0x1000, "%s", "Hello world!\n");

		/*
		 * Write the data buffer to LBA 0 of this namespace.  "write_complete" and
		 *  "&sequence" are specified as the completion callback function and
		 *  argument respectively.  write_complete() will be called with the
		 *  value of &sequence as a parameter when the write I/O is completed.
		 *  This allows users to potentially specify different completion
		 *  callback routines for each I/O, as well as pass a unique handle
		 *  as an argument so the application knows which I/O has completed.
		 *
		 * Note that the SPDK NVMe driver will only check for completions
		 *  when the application calls spdk_nvme_qpair_process_completions().
		 *  It is the responsibility of the application to trigger the polling
		 *  process.
		 */
		cnt = 64;
		while (cnt) {
			rc = stellus_spdk_nvme_ns_cmd_write(ns_entry, ns_entry->qpair[t_id], sequence->buf,
						    0, /* LBA start */
						    1, /* number of LBAs */
						    io_complete, sequence, 0);
			if (rc != 0) {
				fprintf(stderr, "starting write I/O failed\n");
				exit(1);
			}
			rc = stellus_spdk_nvme_ns_cmd_read(ns_entry, ns_entry->qpair[t_id], sequence1->buf,
						    0, /* LBA start */
						    1, /* number of LBAs */
						    io_complete, sequence1, 0);
			if (rc != 0) {
				fprintf(stderr, "starting read I/O failed\n");
				exit(1);
			}
			cnt--;
		}
	
		spdk_nvme_qpair_process_completions(ns_entry->qpair[t_id], 0);
		spdk_nvme_ctrlr_free_io_qpair(ns_entry->qpair[t_id]);
		ns_entry = ns_entry->next;
	}
#if 0
	ns_entry = g_namespaces;
	while (ns_entry != NULL) {
		ns_entry->qpair = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
		cnt = 68;
		while (cnt) {
			rc = stellus_spdk_nvme_ns_cmd_write(ns_entry, ns_entry->qpair, sequence->buf,
						    0, /* LBA start */
						    1, /* number of LBAs */
						    io_complete, sequence, 0);
			if (rc != 0) {
				fprintf(stderr, "starting write I/O failed\n");
				exit(1);
			}
			rc = stellus_spdk_nvme_ns_cmd_read(ns_entry, ns_entry->qpair, sequence1->buf,
						    0, /* LBA start */
						    1, /* number of LBAs */
						    io_complete, sequence1, 0);
			if (rc != 0) {
				fprintf(stderr, "starting read I/O failed\n");
				exit(1);
			}
			cnt--;
		}
		stellus_qpair_process_completions(ns_entry->qpair, 0);
		stellus_qpair_free(ns_entry->qpair);
		ns_entry = ns_entry->next;
	}
	free(sequence->buf);
	free(sequence);
	free(sequence1->buf);
	free(sequence1);
#endif

}
/*
 * Poll for completions.  0 here means process all available completions.
 *  In certain usage models, the caller may specify a positive integer
 *  instead of 0 to signify the maximum number of completions it should
 *  process.  This function will never block - if there are no
 *  completions pending on the specified qpair, it will return immediately.
 */
int
stellus_qpair_process_completions(struct spdk_nvme_qpair *qpair, int num)
{
		spdk_nvme_qpair_process_completions(qpair, num);
		return (0);
}

/*
 * Free the I/O qpair.  This typically is done when an application exits.
 *  But SPDK does support freeing and then reallocating qpairs during
 *  operation.  It is the responsibility of the caller to ensure all
 *  pending I/O are completed before trying to free the qpair.
 */
int
stellus_qpair_free(struct spdk_nvme_qpair *qpair)
{
		spdk_nvme_ctrlr_free_io_qpair(qpair);
		return (0);
}

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);

	return true;
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	int nsid, num_ns;
	struct ctrlr_entry *entry;
	struct spdk_nvme_ns *ns;
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	entry = calloc(1, sizeof(struct ctrlr_entry));
	if (entry == NULL) {
		perror("ctrlr_entry malloc");
		exit(1);
	}

	printf("Attached to %s\n", trid->traddr);

	snprintf(entry->name, sizeof(entry->name), "%-20.20s (%-20.20s)", cdata->mn, cdata->sn);

	entry->ctrlr = ctrlr;
	entry->next = g_controllers;
	g_controllers = entry;

	/*
	 * Each controller has one or more namespaces.  An NVMe namespace is basically
	 *  equivalent to a SCSI LUN.  The controller's IDENTIFY data tells us how
	 *  many namespaces exist on the controller.  For Intel(R) P3X00 controllers,
	 *  it will just be one namespace.
	 *
	 * Note that in NVMe, namespace IDs start at 1, not 0.
	 */
	num_ns = spdk_nvme_ctrlr_get_num_ns(ctrlr);
	printf("Using controller %s with %d namespaces.\n", entry->name, num_ns);
	for (nsid = 1; nsid <= num_ns; nsid++) {
		ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		if (ns == NULL) {
			continue;
		}
		register_ns(ctrlr, ns, nsid, entry->name);
	}
}

static void
cleanup(void)
{
	struct ns_entry *ns_entry = g_namespaces;
	struct ctrlr_entry *ctrlr_entry = g_controllers;

	while (ns_entry) {
		struct ns_entry *next = ns_entry->next;
		free(ns_entry);
		ns_entry = next;
	}

	while (ctrlr_entry) {
		struct ctrlr_entry *next = ctrlr_entry->next;

		spdk_nvme_detach(ctrlr_entry->ctrlr);
		free(ctrlr_entry);
		ctrlr_entry = next;
	}
}

int main(int argc, char **argv)
{
	int rc;
	int i;
	struct spdk_env_opts opts;

	struct ns_entry *ns_entry;
	pthread_t	threads[5];
	/*
	 * SPDK relies on an abstraction around the local environment
	 * named env that handles memory allocation and PCI device operations.
	 * This library must be initialized first.
	 *
	 */
	spdk_env_opts_init(&opts);
	opts.name = "hello_world";
	opts.shm_id = 0;
	spdk_env_init(&opts);
	shm_init();

	printf("Initializing NVMe Controllers\n");

	/*
	 * Start the SPDK NVMe enumeration process.  probe_cb will be called
	 *  for each NVMe controller found, giving our application a choice on
	 *  whether to attach to each controller.  attach_cb will then be
	 *  called for each controller after the SPDK NVMe driver has completed
	 *  initializing the controller we chose to attach.
	 */
	rc = spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
	if (rc != 0) {
		fprintf(stderr, "spdk_nvme_probe() failed\n");
		cleanup();
		return 1;
	}

	if (g_controllers == NULL) {
		fprintf(stderr, "no NVMe controllers found\n");
		cleanup();
		return 1;
	}

	printf("Initialization complete.\n");
	i = 1;
	// ns_entry = g_namespaces;
	// hello_world();
	for (i = 1; i < 2; i++) {
		pthread_create(&threads[i], NULL, (void *) test_io_func1, (void *)&i);
		//test_io_func1(&i);
	}
	/* Wait for the I/O thread to finish before detaching the controllers. */
	for (i = 1; i < 2; i++) {
		pthread_join(threads[i], NULL);
	}
	cleanup();
	return 0;
}

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [SPDK] Regarding NVMe driver command queue depth.
@ 2017-11-17 20:18 Sreeni Busam
  0 siblings, 0 replies; 9+ messages in thread
From: Sreeni Busam @ 2017-11-17 20:18 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5987 bytes --]

Hi Paul,

Yes, that is correct. It is working fine now.

Thanks,
Sreeni

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Friday, November 17, 2017 11:02 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

Cool, so working as you would expect now?

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Friday, November 17, 2017 11:52 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

Hi Jim,

Thanks for taking a look at the problem. It is a logical bug. I fixed it.
The memory for the qpair was allocated outside the ns_entry loop.
I have two SSD devices and the memory was not allocated for the qpair for second ns_entry.
Here is the problem.
ns_entry = g_namespaces;
     ns_entry->qpair_2 = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
     if (ns_entry->qpair_2 == NULL) {
           printf("The qpair allocation failed.\n");
           exit (0);
     }
     while (ns_entry != NULL) {
           cnt = 1;
           while (cnt) {
           // Fails to be successfully submit
                rc = stellus_spdk_nvme_ns_cmd_write(ns_entry, ns_entry->qpair_2, sequence->buf,
                                     0, /* LBA start */
                                     1, /* number of LBAs */
                                     io_complete, sequence, 0)
           ….


Sreeni

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
Sent: Friday, November 17, 2017 8:44 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

Hi Sreeni,

Can you step through your second call to spdk_nvme_ctrlr_alloc_io_qpair?  The callstack clearly shows that qpair=0x0 was passed into stellus_spdk_nvme_ns_cmd_write() at frame #3.  So I think we should back up and figure out why no I/O qpair was allocated (or maybe it was allocated but not saved in a structure or something).

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of "Sreeni (Sreenivasa) Busam (Stellus)" <s.busam(a)stellus.com<mailto:s.busam(a)stellus.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, November 16, 2017 at 4:57 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

I have been trying to test the number of commands that can be given to the device at a time. I verified that a maximum of 254 commands could be issued for a qpair. So I created a 2nd qpair for ns_entry and issued the I/O commands, it was failing in the first command itself. Is it invalid to create 2 qpair for the same ns_entry and send command to device? The qpair is successfully created, but I could not submit command.
I modified the hello_world program to test this and attached the related code.
Please take a look and let me know what is the problem.

0x000000000040bae2 in nvme_allocate_request (qpair=0x0,
    payload=0x7fffa4726ba0, payload_size=512, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270) at nvme.c:85
#1  0x000000000040996c in _nvme_ns_cmd_rw (ns=0x100ff8ee40, qpair=0x0,
    payload=0x7fffa4726ba0, payload_offset=0, md_offset=0, lba=0, lba_count=1,
    cb_fn=0x4041a4 <write_complete>, cb_arg=0x7b4270, opc=1, io_flags=0,
    apptag_mask=0, apptag=0, check_sgl=true) at nvme_ns_cmd.c:440
#2  0x0000000000409fea in spdk_nvme_ns_cmd_write (ns=0x100ff8ee40, qpair=0x0,
    buffer=0x10000f7000, lba=0, lba_count=1, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270, io_flags=0) at nvme_ns_cmd.c:649
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
#4  0x00000000004046b8 in test_io_func1 () at iostat.c:342
#5  0x0000000000404a94 in main (argc=1, argv=0x7fffa4726db8) at iostat.c:503
(gdb) f 3
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
233                     rc = spdk_nvme_ns_cmd_write(ns_entry->ns, qpair, buffer,
(gdb) p qpair
$1 = (struct spdk_nvme_qpair *) 0x0

If any of you get time, please look at it. Thank you for your suggestion.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Thursday, November 16, 2017 11:26 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] Regarding NVMe driver command queue depth.

Hi Paul,

I was reading about the driver from SPDK site, and interested in understanding the queue depth for a device.
“The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128”
When queue depth is mentioned for device, is it the number of commands that can be issued from application to controller, and outstanding at any time?
Is there NVMe driver API to set the queue depth? Is my understanding correct if I think that the size of queue is at firmware level?
Please give some detail about the parameter.

Thanks,
Sreeni

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 19289 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [SPDK] Regarding NVMe driver command queue depth.
@ 2017-11-17 19:02 Luse, Paul E
  0 siblings, 0 replies; 9+ messages in thread
From: Luse, Paul E @ 2017-11-17 19:02 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5644 bytes --]

Cool, so working as you would expect now?

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Friday, November 17, 2017 11:52 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

Hi Jim,

Thanks for taking a look at the problem. It is a logical bug. I fixed it.
The memory for the qpair was allocated outside the ns_entry loop.
I have two SSD devices and the memory was not allocated for the qpair for second ns_entry.
Here is the problem.
ns_entry = g_namespaces;
     ns_entry->qpair_2 = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
     if (ns_entry->qpair_2 == NULL) {
           printf("The qpair allocation failed.\n");
           exit (0);
     }
     while (ns_entry != NULL) {
           cnt = 1;
           while (cnt) {
           // Fails to be successfully submit
                rc = stellus_spdk_nvme_ns_cmd_write(ns_entry, ns_entry->qpair_2, sequence->buf,
                                     0, /* LBA start */
                                     1, /* number of LBAs */
                                     io_complete, sequence, 0)
           ….


Sreeni

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
Sent: Friday, November 17, 2017 8:44 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

Hi Sreeni,

Can you step through your second call to spdk_nvme_ctrlr_alloc_io_qpair?  The callstack clearly shows that qpair=0x0 was passed into stellus_spdk_nvme_ns_cmd_write() at frame #3.  So I think we should back up and figure out why no I/O qpair was allocated (or maybe it was allocated but not saved in a structure or something).

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of "Sreeni (Sreenivasa) Busam (Stellus)" <s.busam(a)stellus.com<mailto:s.busam(a)stellus.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, November 16, 2017 at 4:57 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

I have been trying to test the number of commands that can be given to the device at a time. I verified that a maximum of 254 commands could be issued for a qpair. So I created a 2nd qpair for ns_entry and issued the I/O commands, it was failing in the first command itself. Is it invalid to create 2 qpair for the same ns_entry and send command to device? The qpair is successfully created, but I could not submit command.
I modified the hello_world program to test this and attached the related code.
Please take a look and let me know what is the problem.

0x000000000040bae2 in nvme_allocate_request (qpair=0x0,
    payload=0x7fffa4726ba0, payload_size=512, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270) at nvme.c:85
#1  0x000000000040996c in _nvme_ns_cmd_rw (ns=0x100ff8ee40, qpair=0x0,
    payload=0x7fffa4726ba0, payload_offset=0, md_offset=0, lba=0, lba_count=1,
    cb_fn=0x4041a4 <write_complete>, cb_arg=0x7b4270, opc=1, io_flags=0,
    apptag_mask=0, apptag=0, check_sgl=true) at nvme_ns_cmd.c:440
#2  0x0000000000409fea in spdk_nvme_ns_cmd_write (ns=0x100ff8ee40, qpair=0x0,
    buffer=0x10000f7000, lba=0, lba_count=1, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270, io_flags=0) at nvme_ns_cmd.c:649
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
#4  0x00000000004046b8 in test_io_func1 () at iostat.c:342
#5  0x0000000000404a94 in main (argc=1, argv=0x7fffa4726db8) at iostat.c:503
(gdb) f 3
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
233                     rc = spdk_nvme_ns_cmd_write(ns_entry->ns, qpair, buffer,
(gdb) p qpair
$1 = (struct spdk_nvme_qpair *) 0x0

If any of you get time, please look at it. Thank you for your suggestion.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Thursday, November 16, 2017 11:26 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] Regarding NVMe driver command queue depth.

Hi Paul,

I was reading about the driver from SPDK site, and interested in understanding the queue depth for a device.
“The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128”
When queue depth is mentioned for device, is it the number of commands that can be issued from application to controller, and outstanding at any time?
Is there NVMe driver API to set the queue depth? Is my understanding correct if I think that the size of queue is at firmware level?
Please give some detail about the parameter.

Thanks,
Sreeni

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 17879 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [SPDK] Regarding NVMe driver command queue depth.
@ 2017-11-17 18:52 Sreeni Busam
  0 siblings, 0 replies; 9+ messages in thread
From: Sreeni Busam @ 2017-11-17 18:52 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5302 bytes --]

Hi Jim,

Thanks for taking a look at the problem. It was a logic bug, and I have fixed it.
The qpair was allocated outside the ns_entry loop.
I have two SSD devices, so no qpair was allocated for the second ns_entry.
Here is the problem:
ns_entry = g_namespaces;
     ns_entry->qpair_2 = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
     if (ns_entry->qpair_2 == NULL) {
           printf("The qpair allocation failed.\n");
           exit(0);
     }
     while (ns_entry != NULL) {
           cnt = 1;
           while (cnt) {
                // Fails to submit successfully
                rc = stellus_spdk_nvme_ns_cmd_write(ns_entry, ns_entry->qpair_2, sequence->buf,
                                     0, /* LBA start */
                                     1, /* number of LBAs */
                                     io_complete, sequence, 0);
           ….
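
For reference, a minimal sketch of the fix (an illustration only, assuming the surrounding hello_world context: sequence, rc, and io_complete as declared in the attached program) allocates the qpair inside the ns_entry loop, so each device gets its own:

ns_entry = g_namespaces;
while (ns_entry != NULL) {
     /* Allocate the qpair per namespace entry, inside the loop. */
     ns_entry->qpair_2 = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
     if (ns_entry->qpair_2 == NULL) {
           printf("The qpair allocation failed.\n");
           exit(1);
     }
     rc = stellus_spdk_nvme_ns_cmd_write(ns_entry, ns_entry->qpair_2, sequence->buf,
                          0, /* LBA start */
                          1, /* number of LBAs */
                          io_complete, sequence, 0);
     if (rc != 0) {
           fprintf(stderr, "starting write I/O failed\n");
           exit(1);
     }
     /* ... poll spdk_nvme_qpair_process_completions() and free the qpair
      * with spdk_nvme_ctrlr_free_io_qpair(), as in the attached program ... */
     ns_entry = ns_entry->next;
}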


Sreeni

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
Sent: Friday, November 17, 2017 8:44 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

Hi Sreeni,

Can you step through your second call to spdk_nvme_ctrlr_alloc_io_qpair?  The callstack clearly shows that qpair=0x0 was passed into stellus_spdk_nvme_ns_cmd_write() at frame #3.  So I think we should back up and figure out why no I/O qpair was allocated (or maybe it was allocated but not saved in a structure or something).

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of "Sreeni (Sreenivasa) Busam (Stellus)" <s.busam(a)stellus.com<mailto:s.busam(a)stellus.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, November 16, 2017 at 4:57 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

I have been trying to test the number of commands that can be given to the device at a time. I verified that a maximum of 254 commands could be issued for a qpair. So I created a 2nd qpair for ns_entry and issued the I/O commands, it was failing in the first command itself. Is it invalid to create 2 qpair for the same ns_entry and send command to device? The qpair is successfully created, but I could not submit command.
I modified the hello_world program to test this and attached the related code.
Please take a look and let me know what is the problem.

0x000000000040bae2 in nvme_allocate_request (qpair=0x0,
    payload=0x7fffa4726ba0, payload_size=512, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270) at nvme.c:85
#1  0x000000000040996c in _nvme_ns_cmd_rw (ns=0x100ff8ee40, qpair=0x0,
    payload=0x7fffa4726ba0, payload_offset=0, md_offset=0, lba=0, lba_count=1,
    cb_fn=0x4041a4 <write_complete>, cb_arg=0x7b4270, opc=1, io_flags=0,
    apptag_mask=0, apptag=0, check_sgl=true) at nvme_ns_cmd.c:440
#2  0x0000000000409fea in spdk_nvme_ns_cmd_write (ns=0x100ff8ee40, qpair=0x0,
    buffer=0x10000f7000, lba=0, lba_count=1, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270, io_flags=0) at nvme_ns_cmd.c:649
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
#4  0x00000000004046b8 in test_io_func1 () at iostat.c:342
#5  0x0000000000404a94 in main (argc=1, argv=0x7fffa4726db8) at iostat.c:503
(gdb) f 3
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
233                     rc = spdk_nvme_ns_cmd_write(ns_entry->ns, qpair, buffer,
(gdb) p qpair
$1 = (struct spdk_nvme_qpair *) 0x0

If any of you get time, please look at it. Thank you for your suggestion.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Thursday, November 16, 2017 11:26 AM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] Regarding NVMe driver command queue depth.

Hi Paul,

I was reading about the driver from SPDK site, and interested in understanding the queue depth for a device.
“The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128”
When queue depth is mentioned for device, is it the number of commands that can be issued from application to controller, and outstanding at any time?
Is there NVMe driver API to set the queue depth? Is my understanding correct if I think that the size of queue is at firmware level?
Please give some detail about the parameter.

Thanks,
Sreeni

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 17119 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [SPDK] Regarding NVMe driver command queue depth.
@ 2017-11-17 16:44 Harris, James R
  0 siblings, 0 replies; 9+ messages in thread
From: Harris, James R @ 2017-11-17 16:44 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3966 bytes --]

Hi Sreeni,

Can you step through your second call to spdk_nvme_ctrlr_alloc_io_qpair?  The callstack clearly shows that qpair=0x0 was passed into stellus_spdk_nvme_ns_cmd_write() at frame #3.  So I think we should back up and figure out why no I/O qpair was allocated (or maybe it was allocated but not saved in a structure or something).
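
For example, a generic gdb sketch (the symbol and variable names are taken from the attached program): break on the allocation, continue until the call of interest, use finish to see the returned pointer, then step past the assignment and print the stored value.

(gdb) break spdk_nvme_ctrlr_alloc_io_qpair
(gdb) run
(gdb) continue
(gdb) finish
(gdb) next
(gdb) print ns_entry->qpair_2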

-Jim


From: SPDK <spdk-bounces(a)lists.01.org> on behalf of "Sreeni (Sreenivasa) Busam (Stellus)" <s.busam(a)stellus.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, November 16, 2017 at 4:57 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

I have been trying to test the number of commands that can be given to the device at a time. I verified that a maximum of 254 commands could be issued for a qpair. So I created a 2nd qpair for ns_entry and issued the I/O commands, it was failing in the first command itself. Is it invalid to create 2 qpair for the same ns_entry and send command to device? The qpair is successfully created, but I could not submit command.
I modified the hello_world program to test this and attached the related code.
Please take a look and let me know what is the problem.

0x000000000040bae2 in nvme_allocate_request (qpair=0x0,
    payload=0x7fffa4726ba0, payload_size=512, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270) at nvme.c:85
#1  0x000000000040996c in _nvme_ns_cmd_rw (ns=0x100ff8ee40, qpair=0x0,
    payload=0x7fffa4726ba0, payload_offset=0, md_offset=0, lba=0, lba_count=1,
    cb_fn=0x4041a4 <write_complete>, cb_arg=0x7b4270, opc=1, io_flags=0,
    apptag_mask=0, apptag=0, check_sgl=true) at nvme_ns_cmd.c:440
#2  0x0000000000409fea in spdk_nvme_ns_cmd_write (ns=0x100ff8ee40, qpair=0x0,
    buffer=0x10000f7000, lba=0, lba_count=1, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270, io_flags=0) at nvme_ns_cmd.c:649
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
#4  0x00000000004046b8 in test_io_func1 () at iostat.c:342
#5  0x0000000000404a94 in main (argc=1, argv=0x7fffa4726db8) at iostat.c:503
(gdb) f 3
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
233                     rc = spdk_nvme_ns_cmd_write(ns_entry->ns, qpair, buffer,
(gdb) p qpair
$1 = (struct spdk_nvme_qpair *) 0x0

If any of you get time, please look at it. Thank you for your suggestion.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Thursday, November 16, 2017 11:26 AM
To: spdk(a)lists.01.org
Subject: [SPDK] Regarding NVMe driver command queue depth.

Hi Paul,

I was reading about the driver from SPDK site, and interested in understanding the queue depth for a device.
“The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128”
When queue depth is mentioned for device, is it the number of commands that can be issued from application to controller, and outstanding at any time?
Is there NVMe driver API to set the queue depth? Is my understanding correct if I think that the size of queue is at firmware level?
Please give some detail about the parameter.

Thanks,
Sreeni

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 11720 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [SPDK] Regarding NVMe driver command queue depth.
@ 2017-11-17  0:59 Sreeni Busam
  0 siblings, 0 replies; 9+ messages in thread
From: Sreeni Busam @ 2017-11-17  0:59 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2789 bytes --]

Thanks, Paul.
I get the idea of how the queue size is determined.
Let me take a look at the code and test it.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Thursday, November 16, 2017 4:05 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

Hi Sreeni,

So in NVMe the queues are SW constructs that can be made pretty much any size as long as they are smaller than what the HW reports as its max via CAP.MQES.  In the PSDK NVMe driver you can see how the value is determined in this function:

void
nvme_ctrlr_init_cap(struct spdk_nvme_ctrlr *ctrlr, const union spdk_nvme_cap_register *cap)
{
              ctrlr->cap = *cap;

              ctrlr->min_page_size = 1u << (12 + ctrlr->cap.bits.mpsmin);

              /* For now, always select page_size == min_page_size. */
              ctrlr->page_size = ctrlr->min_page_size;

              ctrlr->opts.io_queue_size = spdk_max(ctrlr->opts.io_queue_size, SPDK_NVME_IO_QUEUE_MIN_ENTRIES);
              ctrlr->opts.io_queue_size = spdk_min(ctrlr->opts.io_queue_size, ctrlr->cap.bits.mqes + 1u);

              ctrlr->opts.io_queue_requests = spdk_max(ctrlr->opts.io_queue_requests, ctrlr->opts.io_queue_size);
}

So you can control the number via the options structure, struct spdk_nvme_ctrlr_opts, passed in when you probe for devices.  So think of it as the size of the submission queue that you create as limited by HW.

Does that make sense?

Thx
Paul


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Thursday, November 16, 2017 12:26 PM
To: spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>
Subject: [SPDK] Regarding NVMe driver command queue depth.

Hi Paul,

I was reading about the driver from SPDK site, and interested in understanding the queue depth for a device.
"The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128"
When queue depth is mentioned for device, is it the number of commands that can be issued from application to controller, and outstanding at any time?
Is there NVMe driver API to set the queue depth? Is my understanding correct if I think that the size of queue is at firmware level?
Please give some detail about the parameter.

Thanks,
Sreeni

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 8568 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [SPDK] Regarding NVMe driver command queue depth.
@ 2017-11-17  0:04 Luse, Paul E
  0 siblings, 0 replies; 9+ messages in thread
From: Luse, Paul E @ 2017-11-17  0:04 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2411 bytes --]

Hi Sreeni,

So in NVMe the queues are SW constructs that can be made pretty much any size as long as they are smaller than what the HW reports as its max via CAP.MQES.  In the SPDK NVMe driver you can see how the value is determined in this function:

void
nvme_ctrlr_init_cap(struct spdk_nvme_ctrlr *ctrlr, const union spdk_nvme_cap_register *cap)
{
	ctrlr->cap = *cap;

	ctrlr->min_page_size = 1u << (12 + ctrlr->cap.bits.mpsmin);

	/* For now, always select page_size == min_page_size. */
	ctrlr->page_size = ctrlr->min_page_size;

	ctrlr->opts.io_queue_size = spdk_max(ctrlr->opts.io_queue_size, SPDK_NVME_IO_QUEUE_MIN_ENTRIES);
	ctrlr->opts.io_queue_size = spdk_min(ctrlr->opts.io_queue_size, ctrlr->cap.bits.mqes + 1u);

	ctrlr->opts.io_queue_requests = spdk_max(ctrlr->opts.io_queue_requests, ctrlr->opts.io_queue_size);
}

You can control the number via the options structure, struct spdk_nvme_ctrlr_opts, that is passed in when you probe for devices.  Think of it as the size of the submission queue that you create, as limited by the HW.
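
For example, a minimal sketch (the values 256 and 512 are arbitrary; the probe_cb signature follows the hello_world example) of requesting a larger I/O queue at probe time:

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	/* Ask for a deeper submission queue; the driver clamps the value to
	 * CAP.MQES + 1 in nvme_ctrlr_init_cap() above. */
	opts->io_queue_size = 256;
	/* Allow at least as many queued request objects as queue entries. */
	opts->io_queue_requests = 512;
	return true;	/* attach to this controller */
}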

Does that make sense?

Thx
Paul


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Thursday, November 16, 2017 12:26 PM
To: spdk(a)lists.01.org
Subject: [SPDK] Regarding NVMe driver command queue depth.

Hi Paul,

I was reading about the driver from SPDK site, and interested in understanding the queue depth for a device.
"The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128"
When queue depth is mentioned for device, is it the number of commands that can be issued from application to controller, and outstanding at any time?
Is there NVMe driver API to set the queue depth? Is my understanding correct if I think that the size of queue is at firmware level?
Please give some detail about the parameter.

Thanks,
Sreeni

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 7371 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [SPDK] Regarding NVMe driver command queue depth.
@ 2017-11-16 23:57 Sreeni Busam
  0 siblings, 0 replies; 9+ messages in thread
From: Sreeni Busam @ 2017-11-16 23:57 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3248 bytes --]

I have been trying to test the number of commands that can be issued to the device at a time. I verified that a maximum of 254 commands could be outstanding on a qpair. So I created a second qpair for the ns_entry and issued I/O commands, but it failed on the very first command. Is it invalid to create two qpairs for the same ns_entry and send commands to the device? The qpair is created successfully, but I could not submit a command.
I modified the hello_world program to test this and have attached the relevant code.
Please take a look and let me know what the problem is.

0x000000000040bae2 in nvme_allocate_request (qpair=0x0,
    payload=0x7fffa4726ba0, payload_size=512, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270) at nvme.c:85
#1  0x000000000040996c in _nvme_ns_cmd_rw (ns=0x100ff8ee40, qpair=0x0,
    payload=0x7fffa4726ba0, payload_offset=0, md_offset=0, lba=0, lba_count=1,
    cb_fn=0x4041a4 <write_complete>, cb_arg=0x7b4270, opc=1, io_flags=0,
    apptag_mask=0, apptag=0, check_sgl=true) at nvme_ns_cmd.c:440
#2  0x0000000000409fea in spdk_nvme_ns_cmd_write (ns=0x100ff8ee40, qpair=0x0,
    buffer=0x10000f7000, lba=0, lba_count=1, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270, io_flags=0) at nvme_ns_cmd.c:649
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
#4  0x00000000004046b8 in test_io_func1 () at iostat.c:342
#5  0x0000000000404a94 in main (argc=1, argv=0x7fffa4726db8) at iostat.c:503
(gdb) f 3
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
233                     rc = spdk_nvme_ns_cmd_write(ns_entry->ns, qpair, buffer,
(gdb) p qpair
$1 = (struct spdk_nvme_qpair *) 0x0

If any of you have time, please take a look at it. Thank you for your suggestion.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Thursday, November 16, 2017 11:26 AM
To: spdk(a)lists.01.org
Subject: [SPDK] Regarding NVMe driver command queue depth.

Hi Paul,

I was reading about the driver from SPDK site, and interested in understanding the queue depth for a device.
"The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128"
When queue depth is mentioned for device, is it the number of commands that can be issued from application to controller, and outstanding at any time?
Is there NVMe driver API to set the queue depth? Is my understanding correct if I think that the size of queue is at firmware level?
Please give some detail about the parameter.

Thanks,
Sreeni

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 9248 bytes --]

[-- Attachment #3: hello_test_prog_v2.obj --]
[-- Type: application/octet-stream, Size: 4049 bytes --]

Modified ns_entry:

struct ns_entry {
	struct spdk_nvme_ctrlr	*ctrlr;
	struct spdk_nvme_ns	*ns;
	struct ns_entry		*next;
	struct spdk_nvme_qpair	*qpair;
	struct spdk_nvme_qpair	*qpair_2;
	int			qpair_allocated;	/* referenced by the modified loop below */
};

Modified program:
while (ns_entry != NULL) {
		/*
		 * Allocate an I/O qpair that we can use to submit read/write requests
		 *  to namespaces on the controller.  NVMe controllers typically support
		 *  many qpairs per controller.  Any I/O qpair allocated for a controller
		 *  can submit I/O to any namespace on that controller.
		 *
		 * The SPDK NVMe driver provides no synchronization for qpair accesses -
		 *  the application must ensure only a single thread submits I/O to a
		 *  qpair, and that same thread must also check for completions on that
		 *  qpair.  This enables extremely efficient I/O processing by making all
		 *  I/O operations completely lockless.
		 */
		if (ns_entry->qpair_allocated == 0) {
			ns_entry->qpair = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
			if (ns_entry->qpair == NULL) {
				printf("ERROR: spdk_nvme_ctrlr_alloc_io_qpair() failed\n");
				return;
			}
		}

/*
		 * Use spdk_dma_zmalloc to allocate a 4KB zeroed buffer.  This memory
		 * will be pinned, which is required for data buffers used for SPDK NVMe
		 * I/O operations.
		 */
		sequence = calloc(1, sizeof(struct io_sequence));
		sequence->buf = spdk_dma_zmalloc(0x1000, 0x1000, NULL);
		sequence->is_completed = 0;
		sequence->ns_entry = ns_entry;
		ns_entry->qpair_allocated = 1;

		sequence1 = calloc(1, sizeof(struct io_sequence));
		sequence1->buf = spdk_dma_zmalloc(0x1000, 0x1000, NULL);
		sequence1->is_completed = 0;
		sequence1->ns_entry = ns_entry;
		/*
		 * Print "Hello world!" to sequence.buf.  We will write this data to LBA
		 *  0 on the namespace, and then later read it back into a separate buffer
		 *  to demonstrate the full I/O path.
		 */
		snprintf(sequence->buf, 0x1000, "%s", "Hello world!\n");

		/*
		 * Write the data buffer to LBA 0 of this namespace.  "write_complete" and
		 *  "&sequence" are specified as the completion callback function and
		 *  argument respectively.  write_complete() will be called with the
		 *  value of &sequence as a parameter when the write I/O is completed.
		 *  This allows users to potentially specify different completion
		 *  callback routines for each I/O, as well as pass a unique handle
		 *  as an argument so the application knows which I/O has completed.
		 *
		 * Note that the SPDK NVMe driver will only check for completions
		 *  when the application calls spdk_nvme_qpair_process_completions().
		 *  It is the responsibility of the application to trigger the polling
		 *  process.
		 */
		cnt = 127;
		while (cnt) {
			rc = stellus_spdk_nvme_ns_cmd_write(ns_entry, ns_entry->qpair, sequence->buf,
						    0, /* LBA start */
						    1, /* number of LBAs */
						    io_complete, sequence, 0);
			if (rc != 0) {
				fprintf(stderr, "starting write I/O failed\n");
				exit(1);
			}
			rc = stellus_spdk_nvme_ns_cmd_read(ns_entry, ns_entry->qpair, sequence1->buf,
						    0, /* LBA start */
						    1, /* number of LBAs */
						    io_complete, sequence1, 0);
			if (rc != 0) {
				fprintf(stderr, "starting read I/O failed\n");
				exit(1);
			}
			cnt--;
		}
		spdk_nvme_qpair_process_completions(ns_entry->qpair, 0);
		ns_entry = ns_entry->next;
	}
	ns_entry = g_namespaces;
	ns_entry->qpair_2 = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
	if (ns_entry->qpair_2 == NULL) {
		printf("The qpair allocation failed.\n");
		exit (0);
	}
	while (ns_entry != NULL) {
		cnt = 1;
		while (cnt) {
		// Fails to submit successfully
			rc = stellus_spdk_nvme_ns_cmd_write(ns_entry, ns_entry->qpair_2, sequence->buf,
						    0, /* LBA start */
						    1, /* number of LBAs */
						    io_complete, sequence, 0);
			if (rc != 0) {
				fprintf(stderr, "starting write I/O failed\n");
				exit(1);
			}
			...
		}
	}




^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2017-11-20 20:50 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-16 19:25 [SPDK] Regarding NVMe driver command queue depth Sreeni Busam
2017-11-16 23:57 Sreeni Busam
2017-11-17  0:04 Luse, Paul E
2017-11-17  0:59 Sreeni Busam
2017-11-17 16:44 Harris, James R
2017-11-17 18:52 Sreeni Busam
2017-11-17 19:02 Luse, Paul E
2017-11-17 20:18 Sreeni Busam
2017-11-20 20:50 Sreeni Busam
