From: Sagi Grimberg <sagi@grimberg.me>
To: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>,
	"hch@lst.de" <hch@lst.de>
Cc: "linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: [RFC PATCH 0/2] nvmet: add polling support
Date: Thu, 12 Dec 2019 12:32:00 -0800
Message-ID: <ed3638c6-7506-4ac6-a2ab-df432b2111b6@grimberg.me>
In-Reply-To: <BYAPR04MB57495A09DE5E7652E2B38AAF86550@BYAPR04MB5749.namprd04.prod.outlook.com>


>> percpu threads per namespace? Sounds like the wrong approach. These
>> threads will compete for cpu time with the main nvmet contexts.
>>
> That makes sense. How about a global threadpool for the target which can
> be shared between all the subsystems and their namespaces, just like we
> have buffered_io_wq?
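
For concreteness, a minimal sketch of what that would mean (the nvmet_poll_wq
name and flags below are made up, not from the patches): one target-wide
workqueue, allocated next to buffered_io_wq in nvmet_init() and shared by
every subsystem/namespace, instead of percpu threads per namespace:

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *nvmet_poll_wq;

static int __init nvmet_init(void)
{
	/* one shared pool for all poll work across the whole target */
	nvmet_poll_wq = alloc_workqueue("nvmet-poll-wq",
					WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	if (!nvmet_poll_wq)
		return -ENOMEM;
	return 0;
}

static void __exit nvmet_exit(void)
{
	destroy_workqueue(nvmet_poll_wq);
}
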
>> Have you considered having the main nvmet contexts incorporate polling
>> activity between I/Os? I don't have a great idea on how to do it off the
>> top of my head...
>>
> 
> I am not able to understand the nvmet context, can you please elaborate?
> Are you referring to the pattern we have in
> nvme_execute_rq_polled()?

No, we would want non-selective polling. Right now we have nvmet contexts
starting from the transport going to submit I/O to the backend, or
starting from the backend going to submit to the transport.

Ideally, we'd have these contexts do the polling instead of a
different thread that polls for as much as it can and takes away
cpu time.
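
For example (just a sketch against the v5.x block polling API, i.e. the
submit_bio() cookie plus blk_poll(); the helper name is made up and this
glosses over split/chained bios), the bdev backend context could
opportunistically poll right after it submits:

#include <linux/blkdev.h>

#include "nvmet.h"

static void nvmet_bdev_submit_and_poll(struct nvmet_ns *ns, struct bio *bio)
{
	struct request_queue *q = bdev_get_queue(ns->bdev);
	blk_qc_t cookie;

	bio->bi_opf |= REQ_HIPRI;	/* ask for a polled completion */
	cookie = submit_bio(bio);

	/*
	 * Poll from the same nvmet context that submitted the I/O;
	 * spin=false keeps it a bounded, opportunistic effort so it
	 * doesn't starve the other nvmet contexts on this cpu.
	 */
	if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
		blk_poll(q, cookie, false);
}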

One way to do it is to place an intermediate thread that sits between
the transport and the backend, but that would add an additional context
switch in the I/O path (not ideal).
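
Roughly what that intermediate poller would look like (all names below are
invented, just to show where the extra hop comes from: the submitting context
has to queue the cookie and wake the poller, one wakeup and context switch
per I/O before anyone actually polls):

#include <linux/blkdev.h>
#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct nvmet_poll_item {
	struct list_head	entry;
	struct request_queue	*q;
	blk_qc_t		cookie;
};

static LIST_HEAD(nvmet_poll_list);
static DEFINE_SPINLOCK(nvmet_poll_lock);
static DECLARE_WAIT_QUEUE_HEAD(nvmet_poll_wait);

static int nvmet_poll_thread(void *data)
{
	struct nvmet_poll_item *item;

	while (!kthread_should_stop()) {
		wait_event_interruptible(nvmet_poll_wait,
				!list_empty(&nvmet_poll_list) ||
				kthread_should_stop());

		spin_lock(&nvmet_poll_lock);
		item = list_first_entry_or_null(&nvmet_poll_list,
						struct nvmet_poll_item, entry);
		if (item)
			list_del(&item->entry);
		spin_unlock(&nvmet_poll_lock);
		if (!item)
			continue;

		/* the actual polling only happens after this extra hop */
		blk_poll(item->q, item->cookie, true);
		kfree(item);
	}
	return 0;
}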


Thread overview: 12+ messages
2019-12-10  6:25 [RFC PATCH 0/2] nvmet: add polling support Chaitanya Kulkarni
2019-12-10  6:25 ` [RFC PATCH 1/2] nvmet: add bdev-ns " Chaitanya Kulkarni
2020-01-20 12:52   ` Max Gurtovoy
2020-01-21 19:22     ` Chaitanya Kulkarni
2020-01-23 14:23       ` Max Gurtovoy
2020-01-30 18:19         ` Chaitanya Kulkarni
2019-12-10  6:25 ` [RFC PATCH 2/2] nvmet: add file-ns " Chaitanya Kulkarni
2019-12-12  1:01 ` [RFC PATCH 0/2] nvmet: add " Sagi Grimberg
2019-12-12  5:44   ` Chaitanya Kulkarni
2019-12-12 20:32     ` Sagi Grimberg [this message]
2020-01-20  5:13       ` Chaitanya Kulkarni
2020-01-20  4:48   ` Chaitanya Kulkarni
