From: Maxime Chevallier <maxime.chevallier@bootlin.com>
To: davem@davemloft.net
Cc: Maxime Chevallier <maxime.chevallier@bootlin.com>,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
Antoine Tenart <antoine.tenart@bootlin.com>,
thomas.petazzoni@bootlin.com, gregory.clement@bootlin.com,
miquel.raynal@bootlin.com, nadavh@marvell.com,
stefanc@marvell.com, ymarkman@marvell.com, mw@semihalf.com
Subject: [PATCH net-next 2/2] net: mvpp2: use round-robin scheduling for TX queues on the same CPU
Date: Mon, 24 Sep 2018 11:11:06 +0200
Message-ID: <20180924091106.15094-3-maxime.chevallier@bootlin.com>
In-Reply-To: <20180924091106.15094-1-maxime.chevallier@bootlin.com>
This commit allows each TXQ to be picked in a round-robin fashion by
the PPv2 transmit scheduling mechanism, as opposed to the default
behaviour, which always prioritizes the highest-numbered queues.
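
To make the difference concrete, here is a small stand-alone C model
of the two policies. It is purely illustrative: the queue count of 8
mirrors MVPP2_MAX_TXQ, but the selection loops are only a sketch of
the behaviour described above, not the PPv2 hardware algorithm.

#include <stdbool.h>
#include <stdio.h>

#define NUM_TXQ 8

/* Toy model of the two arbitration policies: with fixed priority the
 * scheduler always drains the highest-numbered backlogged queue first;
 * with round-robin it cycles through the backlogged queues.
 */
static int pick_fixed_prio(const bool *backlogged)
{
	for (int q = NUM_TXQ - 1; q >= 0; q--)
		if (backlogged[q])
			return q;
	return -1;
}

static int pick_round_robin(const bool *backlogged, int *last)
{
	for (int i = 1; i <= NUM_TXQ; i++) {
		int q = (*last + i) % NUM_TXQ;

		if (backlogged[q]) {
			*last = q;
			return q;
		}
	}
	return -1;
}

int main(void)
{
	bool backlogged[NUM_TXQ] = { [1] = true, [4] = true, [7] = true };
	int last = NUM_TXQ - 1;

	/* Fixed priority keeps returning queue 7 while it stays
	 * backlogged, starving queues 1 and 4; round-robin cycles
	 * through 1, 4, 7, 1, ...
	 */
	for (int i = 0; i < 4; i++)
		printf("fixed: %d  rr: %d\n",
		       pick_fixed_prio(backlogged),
		       pick_round_robin(backlogged, &last));
	return 0;
}
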
Suggested-by: Yan Markman <ymarkman@marvell.com>
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h      | 1 +
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 3 +++
2 files changed, 4 insertions(+)
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index f5dceef60b0e..176c6b56fdcc 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -331,6 +331,7 @@
 #define MVPP2_TXP_SCHED_ENQ_MASK		0xff
 #define MVPP2_TXP_SCHED_DISQ_OFFSET		8
 #define MVPP2_TXP_SCHED_CMD_1_REG		0x8010
+#define MVPP2_TXP_SCHED_FIXED_PRIO_REG		0x8014
 #define MVPP2_TXP_SCHED_PERIOD_REG		0x8018
 #define MVPP2_TXP_SCHED_MTU_REG			0x801c
 #define MVPP2_TXP_MTU_MAX			0x7FFFF
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index bdacb9577216..c2ed71788e4f 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -1448,6 +1448,9 @@ static void mvpp2_defaults_set(struct mvpp2_port *port)
 		    tx_port_num);
 	mvpp2_write(port->priv, MVPP2_TXP_SCHED_CMD_1_REG, 0);
 
+	/* Set TXQ scheduling to Round-Robin */
+	mvpp2_write(port->priv, MVPP2_TXP_SCHED_FIXED_PRIO_REG, 0);
+
 	/* Close bandwidth for all queues */
 	for (queue = 0; queue < MVPP2_MAX_TXQ; queue++) {
 		ptxq = mvpp2_txq_phys(port->id, queue);
--
2.11.0
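
As a side note, if per-queue control were ever needed, a helper along
the following lines could toggle individual queues between the two
policies. This is a hypothetical sketch, not part of the patch: it
assumes that bit N of MVPP2_TXP_SCHED_FIXED_PRIO_REG selects
fixed-priority arbitration for TXQ N (the patch itself only relies on
writing 0 to put every queue in round-robin), and that the caller has
already selected the egress port through MVPP2_TXP_SCHED_PORT_INDEX_REG,
as mvpp2_defaults_set() does.

/* Hypothetical helper, not part of this patch: assumes bit N of
 * MVPP2_TXP_SCHED_FIXED_PRIO_REG enables fixed-priority arbitration
 * for TXQ N, and that a cleared bit selects round-robin. The caller
 * must already have selected the egress port through
 * MVPP2_TXP_SCHED_PORT_INDEX_REG.
 */
static void mvpp2_txq_set_fixed_prio(struct mvpp2_port *port, int txq,
				     bool fixed)
{
	u32 val = mvpp2_read(port->priv, MVPP2_TXP_SCHED_FIXED_PRIO_REG);

	if (fixed)
		val |= BIT(txq);
	else
		val &= ~BIT(txq);

	mvpp2_write(port->priv, MVPP2_TXP_SCHED_FIXED_PRIO_REG, val);
}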