From: Sagi Grimberg <sagi@grimberg.me>
To: Christoph Hellwig <hch@infradead.org>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-block@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	David Runge <dave@sleepmap.de>,
	linux-rt-users@vger.kernel.org, Jens Axboe <axboe@kernel.dk>,
	linux-kernel@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Daniel Wagner <dwagner@suse.de>, Mike Galbraith <efault@gmx.de>
Subject: Re: [PATCH 3/3] blk-mq: Use llist_head for blk_cpu_done
Date: Thu, 29 Oct 2020 13:03:26 -0700
Message-ID: <d2c15411-5b21-535b-6e07-331ebe22f8c8@grimberg.me>
In-Reply-To: <20201029145743.GA19379@infradead.org>


>>> Well, usb-storage obviously seems to do it, and the block layer
>>> does not prohibit it.
>>
>> Also loop, nvme-tcp and then I stopped looking.
>> Any objections about adding local_bh_disable() around it?
> 
> To me it seems like the whole IPI plus potentially softirq dance is
> a little pointless when completing from process context.

I agree.
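
For reference, Sebastian's local_bh_disable() suggestion above would
amount to something like this around a process-context completion
(just a sketch, not a concrete call site from this series):

	/* Keep the completion local and run its softirq work before
	 * returning, instead of deferring to the IPI/softirq machinery.
	 */
	local_bh_disable();
	blk_mq_complete_request(rq);
	local_bh_enable();	/* pending softirqs run here */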

> Sagi, any opinion on that from the nvme-tcp POV?

nvme-tcp should (almost) always complete from the context that matches
rq->mq_ctx->cpu, as the per-hctx thread that processes incoming
completions is affinitized to match it (unless CPUs come and go).
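
To illustrate, nvme-tcp binds each queue's socket work to a single CPU
at setup time, roughly like this (paraphrased from
drivers/nvme/host/tcp.c around v5.9, so treat it as a sketch):

	/* pick a fixed CPU for this queue; n is the queue's index
	 * within its queue map, spread across the online mask */
	queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);

	/* all RX/TX work for the queue is then scheduled there, so
	 * completions normally run on the CPU the hctx maps to */
	queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);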

So for nvme-tcp I don't expect blk_mq_complete_need_ipi() to return
true in normal operation. That leaves teardowns and aborts, which
aren't very interesting here.
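
For context, blk_mq_complete_need_ipi() at this point in the series
boils down to the following check (paraphrased from block/blk-mq.c,
not a verbatim quote):

	static inline bool blk_mq_complete_need_ipi(struct request *rq)
	{
		int cpu = raw_smp_processor_id();

		if (!IS_ENABLED(CONFIG_SMP) ||
		    !test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags))
			return false;

		/* same CPU or same cache domain: complete locally */
		if (cpu == rq->mq_ctx->cpu ||
		    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
		     cpus_share_cache(cpu, rq->mq_ctx->cpu)))
			return false;

		/* don't IPI an offline CPU */
		return cpu_online(rq->mq_ctx->cpu);
	}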

I would note that nvme-tcp does not go to sleep after completing every
I/O, the way Sebastian indicated usb-storage does.

Having said that, today the network stack calls nvme_tcp_data_ready()
in NAPI context (softirq), which in turn kicks the queue's io_work to
handle network RX (and complete the I/O). It has recently been measured
that running the RX path directly in softirq saves some latency
(possible because the nvme-tcp RX context is non-blocking).
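
The hand-off in question looks roughly like this (paraphrased from
nvme_tcp_data_ready() circa v5.9):

	static void nvme_tcp_data_ready(struct sock *sk)
	{
		struct nvme_tcp_queue *queue;

		/* runs in NAPI/softirq context when data arrives */
		read_lock(&sk->sk_callback_lock);
		queue = sk->sk_user_data;
		if (likely(queue && queue->rd_enabled))
			/* kick the per-queue io_work to do the RX */
			queue_work_on(queue->io_cpu, nvme_tcp_wq,
					&queue->io_work);
		read_unlock(&sk->sk_callback_lock);
	}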

So I'd think that patch #2 is unnecessary and would just add overhead
for nvme-tcp. Do note that the NAPI softirq CPU mapping depends on RSS
steering, which is unlikely to match rq->mq_ctx->cpu; hence, if
completed from NAPI context, nvme-tcp will probably always take the
IPI path.


