From: Wei Liu <wei.liu2@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Cc: netdev@vger.kernel.org, konrad.wilk@oracle.com,
	Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [PATCH net-next 1/3] xen-netback: page pool support
Date: Fri, 24 May 2013 14:25:46 +0100
Message-ID: <20130524132546.GC16745@zion.uk.xensource.com> (raw)
In-Reply-To: <519F60AF.5020705@citrix.com>

On Fri, May 24, 2013 at 01:44:31PM +0100, David Vrabel wrote:
> On 24/05/13 11:32, Wei Liu wrote:
> > This patch implements a page pool for all vifs. It has two functionalities:
> >  a) to limit the amount of pages used by all vifs
> >  b) to track pages belong to vifs
> 
> This adds a global spin lock.  This doesn't seem very scalable.
> 

Well, we already have a bunch of spin locks in Linux's page allocator.
This spin lock protects a very small critical section, which looks quite
acceptable to me.
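As an illustration of the pattern being argued for here, a single pool-wide
lock that guards only a counter check keeps the critical section tiny. The
sketch below is a userspace model, not code from the patch: the names
(pool_get_slot, pool_put_slot, POOL_MAX_PAGES) are hypothetical, and a
pthread mutex stands in for the kernel spinlock.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical pool-wide limit, standing in for the patch's page cap. */
#define POOL_MAX_PAGES 256

/* One global lock for the whole pool; it only ever guards the counter. */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int pool_used;

/* Try to reserve one page slot.  The locked region is just a compare
 * and an increment, so contention on pool_lock stays short-lived. */
static bool pool_get_slot(void)
{
    bool ok = false;

    pthread_mutex_lock(&pool_lock);
    if (pool_used < POOL_MAX_PAGES) {
        pool_used++;
        ok = true;
    }
    pthread_mutex_unlock(&pool_lock);

    return ok;
}

/* Return a previously reserved slot to the pool. */
static void pool_put_slot(void)
{
    pthread_mutex_lock(&pool_lock);
    assert(pool_used > 0);
    pool_used--;
    pthread_mutex_unlock(&pool_lock);
}
```

The point of the design is that the lock never covers the actual page
allocation or the grant copy, only the bookkeeping, so the serialized
window per packet is a handful of instructions.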

> It's also not clear how this is usefully limiting the memory usage by
> guest network traffic.  It limits the number of pages that netback can
> use during the grant copy from the guest pages but this is only short
> time compared to the lifetime of the network packet within the rest of
> the network stack.
> 

Please consider that we might have some sort of mapping mechanism in the
future; that's when the page pool becomes able to actually limit the
number of pages used by vifs.

> If you didn't have this page pool stuff then each thread/VIF is limited
> to at most 256 pages anyway and I think 1 MiB of memory per VIF is
> perfectly acceptable.
> 

Please note that 256 is only the current value; we might need to tune
this number in the future.

I would like to have more input on this.


Wei.

> David

