From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753377AbdKWVtB (ORCPT );
	Thu, 23 Nov 2017 16:49:01 -0500
Received: from mga07.intel.com ([134.134.136.100]:15600 "EHLO mga07.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753046AbdKWVs7 (ORCPT );
	Thu, 23 Nov 2017 16:48:59 -0500
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.44,443,1505804400"; d="scan'208";a="5125645"
Date: Thu, 23 Nov 2017 13:48:57 -0800
From: Solio Sarabia <solio.sarabia@intel.com>
To: netdev@vger.kernel.org, davem@davemloft.net, stephen@networkplumber.org
Cc: kys@microsoft.com, shiny.sebastian@intel.com, solio.sarabia@intel.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] net-sysfs: export gso_max_size attribute
Message-ID: <20171123214857.GA41@intel.com>
References: <1511397041-27994-1-git-send-email-solio.sarabia@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <1511397041-27994-1-git-send-email-solio.sarabia@intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Nov 22, 2017 at 04:30:41PM -0800, Solio Sarabia wrote:
> The netdevice gso_max_size is exposed to give users fine-grained
> control on systems with multiple NICs of different GSO buffer sizes,
> where virtual devices like bridge and veth need to be aware of the
> GSO size of the underlying physical devices.
>
> In a virtualized environment, setting the right GSO sizes for physical
> and virtual devices keeps all TSO work on the physical NIC, improving
> throughput and reducing CPU utilization. If virtual devices send
> buffers greater than what the NIC supports, the host is forced to
> segment the buffers that exceed the limit, increasing CPU utilization
> in the host.
>
> Suggested-by: Shiny Sebastian <shiny.sebastian@intel.com>
> Signed-off-by: Solio Sarabia <solio.sarabia@intel.com>
> ---
> In one test scenario with a Hyper-V host, an Ubuntu 16.04 VM, Docker
> inside the VM, and NTttcp sending 40 Gbps from one container, setting
> the right gso_max_size values for all network devices in the chain
> reduced CPU overhead about 3x (for the sender), since all TSO work is
> done by the physical NIC.
>
>  net/core/net-sysfs.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
> index 799b752..7314bc8 100644
> --- a/net/core/net-sysfs.c
> +++ b/net/core/net-sysfs.c
> @@ -376,6 +376,35 @@ static ssize_t gro_flush_timeout_store(struct device *dev,
>  }
>  NETDEVICE_SHOW_RW(gro_flush_timeout, fmt_ulong);
>
> +static int change_gso_max_size(struct net_device *dev, unsigned long new_size)
> +{
> +	unsigned int orig_size = dev->gso_max_size;
> +
> +	if (new_size != (unsigned int)new_size)
> +		return -ERANGE;
> +
> +	if (new_size == orig_size)
> +		return 0;
> +
> +	if (new_size <= 0 || new_size > GSO_MAX_SIZE)
> +		return -ERANGE;
> +
> +	dev->gso_max_size = new_size;
> +	return 0;
> +}

In hindsight, we need to re-evaluate the valid range. As it stands, in
a virtualized environment users could set gso_max_size to a value
greater than what the NIC exposes, which would reintroduce the original
issue: overhead in the host OS due to a configuration value set in the
VM.
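As a rough illustration of one possible direction (not part of the
posted patch), the store path could refuse to raise gso_max_size above
whatever the driver advertised, so the value can only be lowered from
within the VM. A sketch, assuming dev->gso_max_size still holds the
driver-provided ceiling at store time:

static int change_gso_max_size(struct net_device *dev, unsigned long new_size)
{
	/* Reject values that would not survive the cast to unsigned int. */
	if (new_size != (unsigned int)new_size)
		return -ERANGE;

	/*
	 * Illustrative tightening: never raise the limit past what the
	 * driver advertised (e.g. what hv_netvsc learned from the host);
	 * only lowering is allowed.
	 */
	if (new_size == 0 || new_size > dev->gso_max_size)
		return -ERANGE;

	dev->gso_max_size = new_size;
	return 0;
}

One trade-off of this sketch is that a value, once lowered, could not
be raised back through sysfs, since the original ceiling is no longer
recorded anywhere.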
> +
> +static ssize_t gso_max_size_store(struct device *dev,
> +				  struct device_attribute *attr,
> +				  const char *buf, size_t len)
> +{
> +	if (!capable(CAP_NET_ADMIN))
> +		return -EPERM;
> +
> +	return netdev_store(dev, attr, buf, len, change_gso_max_size);
> +}
> +
> +NETDEVICE_SHOW_RW(gso_max_size, fmt_dec);
> +
>  static ssize_t ifalias_store(struct device *dev, struct device_attribute *attr,
>  			     const char *buf, size_t len)
>  {
> @@ -543,6 +572,7 @@ static struct attribute *net_class_attrs[] __ro_after_init = {
>  	&dev_attr_flags.attr,
>  	&dev_attr_tx_queue_len.attr,
>  	&dev_attr_gro_flush_timeout.attr,
> +	&dev_attr_gso_max_size.attr,
>  	&dev_attr_phys_port_id.attr,
>  	&dev_attr_phys_port_name.attr,
>  	&dev_attr_phys_switch_id.attr,
> --
> 2.7.4
>
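For completeness, once the attribute is exported, the intended usage
from inside the VM would be along these lines. This is only an
illustrative user-space snippet; the device name and value are
examples, and the write requires CAP_NET_ADMIN per gso_max_size_store()
above:

#include <stdio.h>

int main(void)
{
	/* Cap a virtual device's GSO size to match the physical NIC by
	 * writing the new sysfs attribute (path and value illustrative).
	 */
	FILE *f = fopen("/sys/class/net/eth0/gso_max_size", "w");

	if (!f) {
		perror("gso_max_size");
		return 1;
	}
	fprintf(f, "%u\n", 65536U);
	fclose(f);
	return 0;
}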