From: Hillf Danton <hdanton@sina.com>
To: Rakesh Pillai <pillair@codeaurora.org>
Cc: Andrew Lunn <andrew@lunn.ch>, Hillf Danton <hdanton@sina.com>,
evgreen@chromium.org, Eric Dumazet <eric.dumazet@gmail.com>,
netdev@vger.kernel.org, linux-wireless@vger.kernel.org,
linux-kernel@vger.kernel.org,
Sebastian Gottschall <s.gottschall@dd-wrt.com>,
dianders@chromium.org, David Laight <David.Laight@ACULAB.COM>,
Markus Elfring <Markus.Elfring@web.de>,
ath10k@lists.infradead.org, kuba@kernel.org,
johannes@sipsolutions.net, davem@davemloft.net,
kvalo@codeaurora.org, Felix Fietkau <nbd@nbd.name>
Subject: RE: [RFC 0/7] Add support to process rx packets in thread
Date: Wed, 29 Jul 2020 09:34:25 +0800 [thread overview]
Message-ID: <20200729013425.13740-1-hdanton@sina.com> (raw)
In-Reply-To: <001001d66500$69a58970$3cf09c50$@codeaurora.org>
On Tue, 28 Jul 2020 22:29:02 +0530 Rakesh Pillai wrote:
> > -----Original Message-----
> > From: David Laight <David.Laight@ACULAB.COM>
> > Sent: Sunday, July 26, 2020 4:46 PM
> > To: 'Sebastian Gottschall' <s.gottschall@dd-wrt.com>; Hillf Danton
> > <hdanton@sina.com>
> > Cc: Andrew Lunn <andrew@lunn.ch>; Rakesh Pillai <pillair@codeaurora.org>;
> > netdev@vger.kernel.org; linux-wireless@vger.kernel.org;
> > linux-kernel@vger.kernel.org; ath10k@lists.infradead.org;
> > dianders@chromium.org; Markus Elfring <Markus.Elfring@web.de>;
> > evgreen@chromium.org; kuba@kernel.org; johannes@sipsolutions.net;
> > davem@davemloft.net; kvalo@codeaurora.org
> > Subject: RE: [RFC 0/7] Add support to process rx packets in thread
> >
> > From: Sebastian Gottschall <s.gottschall@dd-wrt.com>
> > > Sent: 25 July 2020 16:42
> > > >> i agree. i can just say that i tested this patch recently due to
> > > >> this discussion here. and it can be changed by sysfs. but it
> > > >> doesn't work for wifi drivers, which mainly use dummy netdev
> > > >> devices. for this i made a small patch to get them working, using
> > > >> napi_set_threaded hardcoded manually in the drivers. (see patch
> > > >> below)
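For reference, the knob that eventually landed in mainline (around v5.12) exposes this per device via sysfs rather than hardcoded in the driver; a hedged example, where the interface name `eth0` is an assumption for illustration:

```shell
# Enable threaded NAPI for eth0 via the 'threaded' sysfs attribute
# (present once threaded-NAPI support is in the kernel; needs root).
echo 1 > /sys/class/net/eth0/threaded

# The poll work then runs in per-NAPI kernel threads named napi/<dev>-<id>.
ps -e | grep napi
```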
> >
> > > > With CONFIG_THREADED_NAPI there is no need to consider what you
> > > > did here in the napi core, because device drivers know better and
> > > > are responsible for it before calling napi_schedule(n).
> >
> > > yeah. but that approach will not work in some cases. some drivers
> > > take locks in the napi poll function, and in that case performance
> > > degrades badly. i discovered this with the mvneta (marvell) ethernet
> > > driver and with mt76 tx polling (rx works). for mvneta it causes
> > > very high latencies and packet drops; for mt76 it stalls packet
> > > processing entirely. it simply doesn't work (though in no case does
> > > it crash). so the threading will only work for drivers that are
> > > compatible with that approach; it cannot be used as a drop-in
> > > replacement, from my point of view. it is all a question of the
> > > driver design.
> >
> > Why should it make (much) difference whether the napi callbacks (etc.)
> > are run in the context of the interrupted process or in that of a
> > specific kernel thread?
> > The process flags (or whatever) can even be set so that it appears
> > to be the expected 'softint' context.
> >
> > In any case, running NAPI from a thread will just expose the next
> > piece of code that runs for ages in softint context.
> > I think I've seen the tail end of memory being freed under RCU
> > finally happening under softint and taking absolutely ages.
> >
> > David
> >
>
> Hi All,
>
> Has the threaded NAPI change been posted to the kernel?
https://lore.kernel.org/netdev/20200726163119.86162-1-nbd@nbd.name/
https://lore.kernel.org/netdev/20200727123239.4921-1-nbd@nbd.name/
> Is the conclusion of this discussion that "we cannot use threads for
> processing packets"?
No conclusion like that has been reached. TBH your question is hard to
answer; OTOH I'm wondering, if device driver developers prefer to handle
tx/rx in IRQ, BH, or user context on available idle CPUs, what is
preventing them from doing that? And does it make even ant-antenna-size
sense, for instance, to set napi::weight to 3 and turn to 30 kworkers to
process a ten-minute packet flood hitting the hardware on a system with
32 or more CPU cores?
_______________________________________________
ath10k mailing list
ath10k@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/ath10k
Thread overview: 51+ messages
2020-07-21 17:14 [RFC 0/7] Add support to process rx packets in thread Rakesh Pillai
2020-07-21 17:14 ` [RFC 1/7] mac80211: Add check for napi handle before WARN_ON Rakesh Pillai
2020-07-22 12:56 ` Johannes Berg
2020-07-23 18:26 ` Rakesh Pillai
2020-07-23 20:06 ` Johannes Berg
2020-07-24 6:21 ` Rakesh Pillai
2020-07-26 16:19 ` Rakesh Pillai
2020-07-30 12:40 ` Johannes Berg
2020-07-21 17:14 ` [RFC 2/7] ath10k: Add support to process rx packet in thread Rakesh Pillai
2020-07-21 21:53 ` Rajkumar Manoharan
2020-07-22 12:27 ` Felix Fietkau
2020-07-22 12:55 ` Johannes Berg
2020-07-22 13:00 ` Felix Fietkau
2020-07-23 6:09 ` Rajkumar Manoharan
2021-03-22 23:57 ` Ben Greear
2021-03-23 1:20 ` Brian Norris
2021-03-23 3:01 ` Ben Greear
2021-03-23 7:45 ` Felix Fietkau
2021-03-25 9:45 ` Rakesh Pillai
2021-03-25 10:33 ` Felix Fietkau
2020-07-23 18:25 ` Rakesh Pillai
2020-07-24 23:11 ` Jacob Keller
2020-07-21 17:14 ` [RFC 3/7] ath10k: Add module param to enable rx thread Rakesh Pillai
2020-07-21 17:14 ` [RFC 4/7] ath10k: Do not exhaust budget on process tx completion Rakesh Pillai
2020-07-21 17:14 ` [RFC 5/7] ath10k: Handle the rx packet processing in thread Rakesh Pillai
2020-07-21 17:14 ` [RFC 6/7] ath10k: Add deliver to stack from thread context Rakesh Pillai
2020-07-21 17:14 ` [RFC 7/7] ath10k: Handle rx thread suspend and resume Rakesh Pillai
2020-07-23 23:06 ` Sebastian Gottschall
2020-07-24 6:19 ` Rakesh Pillai
2020-07-21 17:25 ` [RFC 0/7] Add support to process rx packets in thread Andrew Lunn
2020-07-21 18:05 ` Florian Fainelli
2020-07-23 18:21 ` Rakesh Pillai
2020-07-23 19:02 ` Florian Fainelli
2020-07-24 6:20 ` Rakesh Pillai
2020-07-24 22:28 ` Florian Fainelli
2020-07-22 9:12 ` David Laight
2020-07-25 8:16 ` Hillf Danton
2020-07-25 10:38 ` Sebastian Gottschall
2020-07-25 12:25 ` Hillf Danton
2020-07-25 14:08 ` Sebastian Gottschall
2020-07-25 14:57 ` Hillf Danton
2020-07-25 15:41 ` Sebastian Gottschall
2020-07-26 11:16 ` David Laight
2020-07-28 16:59 ` Rakesh Pillai
2020-07-29 1:34 ` Hillf Danton [this message]
2020-07-25 17:57 ` Felix Fietkau
2020-07-26 1:22 ` Hillf Danton
2020-07-26 8:10 ` Felix Fietkau
2020-07-26 8:32 ` Hillf Danton
2020-07-26 8:59 ` Felix Fietkau
2020-07-22 16:20 ` Jakub Kicinski