From: Wei Liu
Subject: Re: [PATCH net-next 1/3] xen-netback: page pool support
Date: Fri, 24 May 2013 14:25:46 +0100
Message-ID: <20130524132546.GC16745__9550.58774347908$1369402363$gmane$org@zion.uk.xensource.com>
References: <1369391553-16835-1-git-send-email-wei.liu2@citrix.com> <1369391553-16835-2-git-send-email-wei.liu2@citrix.com> <519F60AF.5020705@citrix.com>
In-Reply-To: <519F60AF.5020705@citrix.com>
To: David Vrabel
Cc: netdev@vger.kernel.org, konrad.wilk@oracle.com, Wei Liu, ian.campbell@citrix.com, xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

On Fri, May 24, 2013 at 01:44:31PM +0100, David Vrabel wrote:
> On 24/05/13 11:32, Wei Liu wrote:
> > This patch implements a page pool for all vifs. It has two functionalities:
> >  a) to limit the number of pages used by all vifs
> >  b) to track pages belonging to vifs
>
> This adds a global spin lock. This doesn't seem very scalable.
>

Well, we already have a bunch of spin locks in Linux's page allocator.
This spin lock protects a very small critical section, which looks quite
acceptable to me.

> It's also not clear how this is usefully limiting the memory usage by
> guest network traffic. It limits the number of pages that netback can
> use during the grant copy from the guest pages, but this is only a short
> time compared to the lifetime of the network packet within the rest of
> the network stack.
>

Please consider that we might have some sort of mapping mechanism in the
future; that is when the page pool becomes able to actually limit the
number of pages used by vifs.
> If you didn't have this page pool stuff then each thread/VIF is limited
> to at most 256 pages anyway and I think 1 MiB of memory per VIF is
> perfectly acceptable.
>

Please note that 256 is only the current value; we might need to tune
this number in the future. I would like to have more input on this.

Wei.

> David