From: Sagi Grimberg <sagi@grimberg.me>
To: "Wunderlich, Mark" <mark.wunderlich@intel.com>,
	"'linux-nvme@lists.infradead.org'"
	<linux-nvme@lists.infradead.org>
Subject: Re: [PATCH v2 rfc 1/2] nvmet-tcp: time based stop condition in io_work
Date: Thu, 26 Sep 2019 13:10:20 -0700	[thread overview]
Message-ID: <0695bebb-968f-36f2-c850-4974c100cbd7@grimberg.me> (raw)
In-Reply-To: <B33B37937B7F3D4CB878107E305D4916D37CA2@ORSMSX104.amr.corp.intel.com>



On 9/26/19 9:38 AM, Wunderlich, Mark wrote:
> Move nvmet_tcp_io_work() to a time-based do/while loop termination
> condition that works whether the network interface is in interrupt
> or polling mode.  If the busy-poll time period configuration setting
> (net.core.busy_read) is not active, assume interrupt mode and default
> the loop period to a time long enough to cover the previous default
> ops budget of 64.
> 
> Preserve the previous interrupt-mode behavior in that the do/while
> loop can exit early if 'pending' is false at the end of an iteration.
> 
> Outside the loop, add the poll-mode interpretation of 'pending',
> resetting it to true if there was any recv/send activity during the
> complete loop time period.
> 
> Re-queue the work item if either mode determines that previous
> activity indicates there may be additional 'pending' work to process.
> 
> Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
> Signed-off-by: Kiran Patil <kiran.patil@intel.com>
> Signed-off-by: Mark Wunderlich <mark.wunderlich@intel.com>
> ---
>   drivers/nvme/target/tcp.c |   29 +++++++++++++++++++++++++----
>   1 file changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index bf4f034..8ec124a 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -21,7 +21,6 @@
>   
>   #define NVMET_TCP_RECV_BUDGET		8
>   #define NVMET_TCP_SEND_BUDGET		8
> -#define NVMET_TCP_IO_WORK_BUDGET	64
>   
>   enum nvmet_tcp_send_state {
>   	NVMET_TCP_SEND_DATA_PDU,
> @@ -1167,6 +1166,15 @@ static void nvmet_tcp_io_work(struct work_struct *w)
>   		container_of(w, struct nvmet_tcp_queue, io_work);
>   	bool pending;
>   	int ret, ops = 0;
> +	unsigned long deadline, bp_usec;
> +	bool interrupt_mode = false;
> +
> +	bp_usec = READ_ONCE(queue->sock->sk->sk_ll_usec);
> +	if (!bp_usec) {
> +		bp_usec = 1000; /* 1 msec default for interrupt mode */
> +		interrupt_mode = true;

Can't say I like the interrupt_mode variable...
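
For example (completely untested, just to illustrate the point), the
"no busy poll configured" case could be folded into the loop condition
so the function doesn't carry a flag around:

	do {
		pending = false;
		/* ... recv/send budget processing as in the patch ... */

		/*
		 * Untested sketch: if busy polling is off (sk_ll_usec == 0)
		 * an idle iteration ends the loop, otherwise keep polling
		 * until the deadline, with no interrupt_mode flag needed.
		 */
	} while (!time_after(jiffies, deadline) &&
		 (pending || READ_ONCE(queue->sock->sk->sk_ll_usec)));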

> +	}
> +	deadline = jiffies + usecs_to_jiffies(bp_usec);
>   
>   	do {
>   		pending = false;
> @@ -1194,10 +1202,23 @@ static void nvmet_tcp_io_work(struct work_struct *w)
>   			return;
>   		}
>   
> -	} while (pending && ops < NVMET_TCP_IO_WORK_BUDGET);
> +		/* If in interrupt mode, exit loop if this
> +		 * time through there was no activity.
> +		 */
> +		if (interrupt_mode && !pending)
> +			break;

Why can't you break in ADQ mode if you don't have any pending work?
Does that mean that you won't get an interrupt?

I think we're lacking an understanding of how this works: when does
the application get an interrupt, and when does it not...

I would like to see some abstraction from the socket interface that
tells us whether we should poll more, instead of the transport having
to be aware of the interrupt vs. ADQ distinction...
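
Something along these lines (not an existing kernel interface, just a
sketch of the idea) would keep the transport oblivious to how
completions are delivered:

/*
 * Hypothetical helper, only to illustrate the abstraction: the socket
 * layer decides whether more polling is worthwhile, so the transport
 * never reasons about interrupt vs. ADQ itself.
 */
static inline bool nvmet_tcp_more_to_poll(struct sock *sk,
					  unsigned long deadline,
					  bool pending)
{
	/* Busy-poll window configured: spin until it expires. */
	if (READ_ONCE(sk->sk_ll_usec))
		return !time_after(jiffies, deadline);

	/* No busy polling: stop as soon as an iteration was idle. */
	return pending && !time_after(jiffies, deadline);
}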

Overall, the performance improvement with ADQ is impressive. Does this
have any impact vs. a non-ADQ device?

>   
> -	/*
> -	 * We exahusted our budget, requeue our selves
> +	} while (!time_after(jiffies, deadline));
> +
> +	/* For poll mode, pending will be true if there was
> +	 * any completed operations during the complete loop
> +	 * period.
> +	 */
> +	if (!interrupt_mode && ops > 0)
> +		pending = true;
> +
> +	/* We exhausted our budget, requeue if pending indicates
> +	 * potential of more to process in either mode.
>   	 */
>   	if (pending)
>   		queue_work_on(queue->cpu, nvmet_tcp_wq, &queue->io_work);
> 
