Subject: Re: [Intel-wired-lan] [PATCH] ixgbe: take online CPU number as MQ max limit when alloc_etherdev_mq()
From: John Fastabend
To: Alexander Duyck, ethan zhao
Cc: "linux-kernel@vger.kernel.org", intel-wired-lan, Netdev, Ethan Zhao
Date: Mon, 16 May 2016 10:14:04 -0700
Message-ID: <5739FFDC.80908@gmail.com>
References: <1463118995-31763-1-git-send-email-ethan.zhao@oracle.com> <573937A7.9010405@oracle.com>

[...]

>>> ixgbe_main.c. All you are doing with this patch is denying the user
>>> choice, as they are then not allowed to set more
>>
>> Yes, it is intended to deny configurations that don't benefit.
>
> Doesn't benefit whom? It is obvious you don't understand how DCB is
> meant to work, since you are assuming the queues are throw-away.
> Anyone who makes use of the ability to prioritize their traffic would
> likely have a different opinion.

+1. This is actually needed so that when DCB is turned on we can both
prioritize between TCs (the DCB feature) and avoid a performance
degradation when only a single TC is transmitting. If we break this
(and it has happened occasionally) we end up with bug reports, so it's
clear to me that folks care about it.

>
>>> queues. Even if they find your decision was wrong for their
>>> configuration.
>>>
>>> - Alex
>>>
>> Thanks,
>> Ethan
>
> Your response clearly points out that you don't understand DCB. I
> suggest you take another look at how things are actually being
> configured. I believe what you will find is that the current
> implementation already bases things on the number of online CPUs via
> the ring_feature[RING_F_RSS].limit value. All that happens is that
> this value is multiplied by the number of TCs, and the RSS value is
> reduced if the result is greater than 64, the maximum number of
> queues.
>
> With your code, on an 8-core system you go from being able to perform
> RSS over 8 queues to only being able to perform RSS over 1 queue when
> you enable DCB. There was a bug a long time ago where this actually
> didn't provide any gain because the interrupt allocation was binding
> all 8 RSS queues to a single q_vector, but that has long since been
> fixed, and what you should see is that RSS will spread traffic across
> either 8 or 16 queues when DCB is enabled in 8 or 4 TC mode,
> respectively.
>
> My advice would be to use a netperf TCP_CRR test and watch which
> queues and which interrupts traffic is being delivered to. Then, if
> you have DCB enabled on both ends, you might try changing the
> priority of your netperf session and watch what happens when you
> switch between TCs. What you should find is that you will shift
> between groups of queues, and as you do so you should not have any
> active queues overlapping unless you have fewer interrupts than CPUs.
>

Yep.
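To make the queue math concrete, here is a quick userspace sketch (my
own illustration, not the driver code; the 64-queue cap and the 8/16
queues-per-TC maximums are taken from Alex's description above, and
the function name is made up):

#include <stdio.h>

#define MAX_HW_QUEUES 64	/* hardware queue limit cited in the thread */

/*
 * Illustrative only: RSS starts at the number of online CPUs (which is
 * what ring_feature[RING_F_RSS].limit already tracks), is clamped to
 * the per-TC maximum (8 queues/TC in 8 TC mode, 16 in 4 TC mode), and
 * is then trimmed so rss * tcs never exceeds the hardware limit.
 */
static unsigned int dcb_rss_queues(unsigned int online_cpus,
				   unsigned int tcs)
{
	unsigned int rss = online_cpus;
	unsigned int per_tc_max = (tcs > 4) ? 8 : 16;

	if (rss > per_tc_max)
		rss = per_tc_max;
	if (rss * tcs > MAX_HW_QUEUES)
		rss = MAX_HW_QUEUES / tcs;
	return rss;
}

int main(void)
{
	/* 8-core box keeps RSS over 8 queues per TC in both modes */
	printf("8 CPUs, 4 TCs: %u queues per TC\n", dcb_rss_queues(8, 4));
	printf("8 CPUs, 8 TCs: %u queues per TC\n", dcb_rss_queues(8, 8));
	/* bigger box is capped at 16 (4 TC mode) or 8 (8 TC mode) per TC */
	printf("32 CPUs, 4 TCs: %u queues per TC\n", dcb_rss_queues(32, 4));
	printf("32 CPUs, 8 TCs: %u queues per TC\n", dcb_rss_queues(32, 8));
	return 0;
}

So with the proposed patch an 8-core box would drop from 8 RSS queues
per TC to a single queue per TC once DCB is enabled, which is exactly
the regression Alex is pointing at.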
Thanks,
John

> - Alex