From: Shreyas Bhatewara <sbhatewara@vmware.com>
To: Ben Hutchings <bhutchings@solarflare.com>,
	Stephen Hemminger <shemminger@vyatta.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"pv-drivers@vmware.com" <pv-drivers@vmware.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: RE: [PATCH 2.6.35-rc6] net-next: Add multiqueue support to vmxnet3 driver
Date: Thu, 14 Oct 2010 16:31:35 -0700	[thread overview]
Message-ID: <89E2752CFA8EC044846EB8499819134102BF0F3B1D@EXCH-MBX-4.vmware.com> (raw)
In-Reply-To: <1287073868.2258.13.camel@achroite.uk.solarflarecom.com>




> -----Original Message-----
> From: Ben Hutchings [mailto:bhutchings@solarflare.com]
> Sent: Thursday, October 14, 2010 9:31 AM
> To: Stephen Hemminger; Shreyas Bhatewara
> Cc: netdev@vger.kernel.org; pv-drivers@vmware.com; linux-
> kernel@vger.kernel.org
> Subject: Re: [PATCH 2.6.35-rc6] net-next: Add multiqueue support to
> vmxnet3 driver
> 
> On Wed, 2010-10-13 at 14:57 -0700, Stephen Hemminger wrote:
> > On Wed, 13 Oct 2010 14:47:05 -0700 (PDT)
> > Shreyas Bhatewara <sbhatewara@vmware.com> wrote:
> >
> > > #ifdef VMXNET3_RSS
> > > +static unsigned int num_rss_entries;
> > > +#define VMXNET3_MAX_DEVICES 10
> > > +
> > > +static int rss_ind_table[VMXNET3_MAX_DEVICES *
> > > +			 VMXNET3_RSS_IND_TABLE_SIZE + 1] = {
> > > +	[0 ... VMXNET3_MAX_DEVICES * VMXNET3_RSS_IND_TABLE_SIZE] = -1 };
> > > +#endif
> > > +static int num_tqs[VMXNET3_MAX_DEVICES + 1] = {
> > > +	[0 ... VMXNET3_MAX_DEVICES] = 1 };
> > > +static int num_rqs[VMXNET3_MAX_DEVICES + 1] = {
> > > +	[0 ... VMXNET3_MAX_DEVICES] = 1 };
> > > +static int share_tx_intr[VMXNET3_MAX_DEVICES + 1] = {
> > > +	[0 ... VMXNET3_MAX_DEVICES] = 0 };
> > > +static int buddy_intr[VMXNET3_MAX_DEVICES + 1] = {
> > > +	[0 ... VMXNET3_MAX_DEVICES] = 1 };
> > > +
> > > +static unsigned int num_adapters;
> > > +module_param_array(share_tx_intr, int, &num_adapters, 0400);
> > > +MODULE_PARM_DESC(share_tx_intr, "Share one IRQ among all tx queue completions. "
> > > +		 "Comma separated list of 1s and 0s - one for each NIC. "
> > > +		 "1 to share, 0 to not, default is 0");
> > > +module_param_array(buddy_intr, int, &num_adapters, 0400);
> > > +MODULE_PARM_DESC(buddy_intr, "Share one IRQ among corresponding tx and rx "
> > > +		 "queues. Comma separated list of 1s and 0s - one for each "
> > > +		 "NIC. 1 to share, 0 to not, default is 1");
> > > +module_param_array(num_tqs, int, &num_adapters, 0400);
> > > +MODULE_PARM_DESC(num_tqs, "Number of transmit queues in each adapter. Comma "
> > > +		 "separated list of integers. Setting this to 0 makes number"
> > > +		 " of queues same as number of CPUs. Default is 1.");
> > > +
> > > +#ifdef VMXNET3_RSS
> > > +module_param_array(rss_ind_table, int, &num_rss_entries, 0400);
> > > +MODULE_PARM_DESC(rss_ind_table, "RSS Indirection table. Number of entries "
> > > +		 "per NIC should be 32. Each integer in a comma separated list"
> > > +		 " is an rx queue number starting with 0. Repeat the same for"
> > > +		 " all NICs.");
> > > +module_param_array(num_rqs, int, &num_adapters, 0400);
> > > +MODULE_PARM_DESC(num_rqs, "Number of receive queues in each adapter. Comma "
> > > +		 "separated list of integers. Setting this to 0 makes number"
> > > +		 " of queues same as number of CPUs. Default is 1.");
> >
> > Module parameters are not right for this. They lead to a different API
> > for interacting with each driver vendor. Is there another, better API?
> > Does it have to be this tweakable in a production environment?
> 
> The ethtool commands ETHTOOL_{G,S}RXFHINDIR cover the RSS indirection
> table.  These are new in 2.6.36 but already supported in the ethtool
> utility.

Thanks Ben,

Good to know. I will try to replace the module parameter for the RSS indirection table with handlers for these ethtool commands.


> 
> As for numbers of queues and association of their completions with
> interrupts, we currently have nothing except ETHTOOL_GRXRINGS to get
> the
> number of RX queues.  I did post a tentative definition of an ethtool
> interface for this in
> <http://article.gmane.org/gmane.linux.network/172386> though it
> wouldn't
> provide quite as much control as these module parameters.  It is also
> significantly more difficult to support changing numbers of queues
> after
> an interface has been created, and I have not yet attempted to
> implement
> the 'set' command myself.


Okay. It would be best to keep the module parameters that dictate the number of queues until equivalent ethtool commands become available and easy to use (a command to change the number of tx queues does not exist yet).
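In the meantime the per-NIC lists would be supplied at module load time. A hypothetical invocation (the parameter names are from the patch above; the device count and values are only an example):

```shell
# Two vmxnet3 NICs: 4 tx and 4 rx queues on the first, driver defaults
# (1 each) on the second; no shared tx IRQ, buddy tx/rx interrupts on.
modprobe vmxnet3 num_tqs=4,1 num_rqs=4,1 share_tx_intr=0,0 buddy_intr=1,1
```

Once the ETHTOOL_{G,S}RXFHINDIR handlers are in place, the RSS indirection table itself could instead be inspected and rewritten at runtime through the ethtool utility, without a driver reload.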

Regards.
Shreyas


> 
> Ben.
> 
> --
> Ben Hutchings, Senior Software Engineer, Solarflare Communications
> Not speaking for my employer; that's the marketing department's job.
> They asked us to note that Solarflare product names are trademarked.




Thread overview: 12+ messages
     [not found] <alpine.LRH.2.00.1009290104130.464@sbhatewara-dev1.eng.vmware.com>
2010-10-13 21:47 ` [PATCH 2.6.35-rc6] net-next: Add multiqueue support to vmxnet3 driver Shreyas Bhatewara
2010-10-13 21:57   ` Stephen Hemminger
2010-10-13 22:26     ` Shreyas Bhatewara
2010-10-14 16:31     ` Ben Hutchings
2010-10-14 23:31       ` Shreyas Bhatewara [this message]
2010-10-14 23:31         ` Shreyas Bhatewara
2010-10-15 16:23         ` David Miller
2010-11-01 22:42           ` [PATCH 2.6.35-rc8] net-next: Add multiqueue support to vmxnet3 v2driver Shreyas Bhatewara
2010-11-10 22:37             ` [PATCH 2.6.36-rc8] " Shreyas Bhatewara
2010-11-17  5:14               ` [PATCH 2.6.37-rc1] net-next: Add multiqueue support to vmxnet3 driver v3 Shreyas Bhatewara
2010-11-17 17:23                 ` Ben Hutchings
2010-11-17 17:27                   ` David Miller
