From: Yan Markman <ymarkman@marvell.com>
To: Antoine Tenart <antoine.tenart@bootlin.com>
Cc: "davem@davemloft.net" <davem@davemloft.net>,
	"linux@armlinux.org.uk" <linux@armlinux.org.uk>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"thomas.petazzoni@bootlin.com" <thomas.petazzoni@bootlin.com>,
	"maxime.chevallier@bootlin.com" <maxime.chevallier@bootlin.com>,
	"gregory.clement@bootlin.com" <gregory.clement@bootlin.com>,
	"miquel.raynal@bootlin.com" <miquel.raynal@bootlin.com>,
	Nadav Haklai <nadavh@marvell.com>,
	Stefan Chulski <stefanc@marvell.com>,
	"mw@semihalf.com" <mw@semihalf.com>
Subject: RE: [EXT] [PATCH net-next 07/15] net: mvpp2: fix the computation of the RXQs
Date: Thu, 28 Feb 2019 15:53:27 +0000	[thread overview]
Message-ID: <DM5PR18MB213475B02326F74B5E239A57D6750@DM5PR18MB2134.namprd18.prod.outlook.com> (raw)
In-Reply-To: <20190228155059.GH4359@kwain>

OK, OK. "It does fix PPv2.1 support, which was broken" is a great reason!

-----Original Message-----
From: Antoine Tenart <antoine.tenart@bootlin.com> 
Sent: Thursday, February 28, 2019 5:51 PM
To: Yan Markman <ymarkman@marvell.com>
Cc: Antoine Tenart <antoine.tenart@bootlin.com>; davem@davemloft.net; linux@armlinux.org.uk; netdev@vger.kernel.org; linux-kernel@vger.kernel.org; thomas.petazzoni@bootlin.com; maxime.chevallier@bootlin.com; gregory.clement@bootlin.com; miquel.raynal@bootlin.com; Nadav Haklai <nadavh@marvell.com>; Stefan Chulski <stefanc@marvell.com>; mw@semihalf.com
Subject: Re: [EXT] [PATCH net-next 07/15] net: mvpp2: fix the computation of the RXQs

Yan,

On Thu, Feb 28, 2019 at 03:40:50PM +0000, Yan Markman wrote:
> 
> Regarding the MVPP2_DEFAULT_RXQ
> The current variant seems flexible, permitting easy customization of 
> the configuration according to customers' needs.
> 
> Regarding the Queue in probe():
> Looking into the old code, there were not 2 queue modes but 3:
> enum mv_pp2_queue_distribution_mode {
> 	MVPP2_QDIST_SINGLE_MODE,
> 	MVPP2_QDIST_MULTI_MODE,
> 	MVPP2_SINGLE_RESOURCE_MODE
> };
> The current if (MVPP2_QDIST_MULTI_MODE) ... else is correct also for
> MVPP2_SINGLE_RESOURCE_MODE, but the new/patched code isn't.

There are only 2 modes supported in the upstream kernel:
MVPP2_QDIST_SINGLE_MODE and MVPP2_QDIST_MULTI_MODE. The third one you mentioned is only supported in out-of-tree kernels.

Therefore patches sent to the upstream kernel do not take it into account, as it is not supported.

> Since this patch doesn't change any functionality (right now) but 
> reduces flexibility, I do not see a real reason to apply it.

This patch does not break the upstream support of PPv2 and does improve two things:

- It limits the total number of RXQs being allocated, to ensure the
  number of RXQs being used does not exceed the number of available
  RXQs (which would make the driver fail).
- It does fix PPv2.1 support, which was broken.

I do think the patch will benefit the upstream PPv2 support.

Thanks,
Antoine

> The patch fixes the computation of RXQs being used by the PPv2 driver, 
> which is set depending on the PPv2 engine version and the queue mode 
> used. There are three cases:
> 
> - PPv2.1: 1 RXQ per CPU.
> - PPV2.2 with MVPP2_QDIST_MULTI_MODE: 1 RXQ per CPU.
> - PPv2.2 with MVPP2_QDIST_SINGLE_MODE: 1 RXQ is shared between the CPUs.
> 
> The PPv2 engine supports a maximum of 32 queues per port. This patch 
> adds a check so that we do not overstep this maximum.
> 
> It appeared the calculation was broken for PPv2.1 engines since 
> f8c6ba8424b0, as PPv2.1 ports ended up with a single RXQ while they 
> needed 4. This patch fixes it.
> 
> Fixes: f8c6ba8424b0 ("net: mvpp2: use only one rx queue per port per CPU")
> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
> ---
>  drivers/net/ethernet/marvell/mvpp2/mvpp2.h    |  4 ++--
>  .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 23 ++++++++++++-------
>  2 files changed, 17 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> index 17ff330cce5f..687e011de5ef 100644
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> @@ -549,8 +549,8 @@
>  #define MVPP2_MAX_TSO_SEGS		300
>  #define MVPP2_MAX_SKB_DESCS		(MVPP2_MAX_TSO_SEGS * 2 + MAX_SKB_FRAGS)
>  
> -/* Default number of RXQs in use */
> -#define MVPP2_DEFAULT_RXQ		1
> +/* Max number of RXQs per port */
> +#define MVPP2_PORT_MAX_RXQ		32
>  
>  /* Max number of Rx descriptors */
>  #define MVPP2_MAX_RXD_MAX		1024
> diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> index 24cee6cbe309..9c6200a59910 100644
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> @@ -4062,8 +4062,8 @@ static int mvpp2_multi_queue_vectors_init(struct mvpp2_port *port,
>  			snprintf(irqname, sizeof(irqname), "hif%d", i);
>  
>  		if (queue_mode == MVPP2_QDIST_MULTI_MODE) {
> -			v->first_rxq = i * MVPP2_DEFAULT_RXQ;
> -			v->nrxqs = MVPP2_DEFAULT_RXQ;
> +			v->first_rxq = i;
> +			v->nrxqs = 1;
>  		} else if (queue_mode == MVPP2_QDIST_SINGLE_MODE &&
>  			   i == (port->nqvecs - 1)) {
>  			v->first_rxq = 0;
> @@ -4156,8 +4156,7 @@ static int mvpp2_port_init(struct mvpp2_port *port)
>  	    MVPP2_MAX_PORTS * priv->max_port_rxqs)
>  		return -EINVAL;
>  
> -	if (port->nrxqs % MVPP2_DEFAULT_RXQ ||
> -	    port->nrxqs > priv->max_port_rxqs || port->ntxqs > MVPP2_MAX_TXQ)
> +	if (port->nrxqs > priv->max_port_rxqs || port->ntxqs > MVPP2_MAX_TXQ)
>  		return -EINVAL;
>  
>  	/* Disable port */
> @@ -4778,10 +4777,18 @@ static int mvpp2_port_probe(struct platform_device *pdev,
>  	}
>  
>  	ntxqs = MVPP2_MAX_TXQ;
> -	if (priv->hw_version == MVPP22 && queue_mode == MVPP2_QDIST_MULTI_MODE)
> -		nrxqs = MVPP2_DEFAULT_RXQ * num_possible_cpus();
> -	else
> -		nrxqs = MVPP2_DEFAULT_RXQ;
> +	if (priv->hw_version == MVPP22 && queue_mode == MVPP2_QDIST_SINGLE_MODE) {
> +		nrxqs = 1;
> +	} else {
> +		/* According to the PPv2.2 datasheet and our experiments on
> +		 * PPv2.1, RX queues have an allocation granularity of 4 (when
> +		 * more than a single one on PPv2.2).
> +		 * Round up to nearest multiple of 4.
> +		 */
> +		nrxqs = (num_possible_cpus() + 3) & ~0x3;
> +		if (nrxqs > MVPP2_PORT_MAX_RXQ)
> +			nrxqs = MVPP2_PORT_MAX_RXQ;
> +	}
>  
>  	dev = alloc_etherdev_mqs(sizeof(*port), ntxqs, nrxqs);
>  	if (!dev)
> --
> 2.20.1
> 

--
Antoine Ténart, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
