From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Aaron Lu <aaron.lu@intel.com>
Cc: "Saeed Mahameed" <saeedm@mellanox.com>,
	"pstaszewski@itcare.pl" <pstaszewski@itcare.pl>,
	"eric.dumazet@gmail.com" <eric.dumazet@gmail.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"Tariq Toukan" <tariqt@mellanox.com>,
	"ilias.apalodimas@linaro.org" <ilias.apalodimas@linaro.org>,
	"yoel@kviknet.dk" <yoel@kviknet.dk>,
	"mgorman@techsingularity.net" <mgorman@techsingularity.net>,
	brouer@redhat.com, "Jérôme Glisse" <jglisse@redhat.com>
Subject: Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
Date: Mon, 5 Nov 2018 10:10:42 +0100	[thread overview]
Message-ID: <20181105101042.269292ae@redhat.com> (raw)
In-Reply-To: <20181105062836.GB4502@intel.com>

On Mon, 5 Nov 2018 14:28:36 +0800
Aaron Lu <aaron.lu@intel.com> wrote:

> On Sat, Nov 03, 2018 at 01:53:25PM +0100, Jesper Dangaard Brouer wrote:
> > 
> > On Fri, 2 Nov 2018 22:20:24 +0800 Aaron Lu <aaron.lu@intel.com> wrote:
> >   
> > > On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:  
> > > > On Fri, 2 Nov 2018 13:23:56 +0800
> > > > Aaron Lu <aaron.lu@intel.com> wrote:
> > > >     
> > > > > On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:    
> > > > > > On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:      
> > > > > > > On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
> > > > > > > wrote:
> > > > > > > ... ...      
> > > > > > > > Section copied out:
> > > > > > > > 
> > > > > > > >   mlx5e_poll_tx_cq
> > > > > > > >   |          
> > > > > > > >    --16.34%--napi_consume_skb
> > > > > > > >              |          
> > > > > > > >              |--12.65%--__free_pages_ok
> > > > > > > >              |          |          
> > > > > > > >              |           --11.86%--free_one_page
> > > > > > > >              |                     |          
> > > > > > > >              |                     |--10.10%--queued_spin_lock_slowpath
> > > > > > > >              |                     |          
> > > > > > > >              |                      --0.65%--_raw_spin_lock      
> > > > > > > 
> > > > > > > This callchain looks like it is freeing pages of higher order
> > > > > > > than 0: __free_pages_ok() is only called for pages whose order
> > > > > > > is bigger than 0.
> > > > > > 
> > > > > > mlx5 rx uses only order-0 pages, so I don't know where these
> > > > > > high-order tx SKBs are coming from.
> > > > > 
> > > > > Perhaps here:
> > > > > __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
> > > > > __napi_alloc_frag() will all call page_frag_alloc(), which will use
> > > > > __page_frag_cache_refill() to get an order 3 page if possible, or fall
> > > > > back to an order 0 page if order 3 page is not available.
> > > > > 
> > > > > I'm not sure if your workload will use the above code path though.    
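
For reference, the refill path Aaron describes looks roughly like the
sketch below (simplified from memory, details differ between kernel
versions): try an order-3 page first, and only fall back to a single
order-0 page if that fails.

static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
					     gfp_t gfp_mask)
{
	struct page *page;

	/* Opportunistically try a higher-order (order-3) page first */
	page = alloc_pages_node(NUMA_NO_NODE,
				gfp_mask | __GFP_COMP | __GFP_NOWARN |
				__GFP_NORETRY,
				PAGE_FRAG_CACHE_MAX_ORDER);

	/* ... and fall back to a plain order-0 page if that fails */
	if (unlikely(!page))
		page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, 0);

	nc->va = page ? page_address(page) : NULL;
	return page;
}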
> > > > 
> > > > TL;DR: this is order-0 pages (code-walk trough proof below)
> > > > 
> > > > To Aaron, the network stack *can* call __free_pages_ok() with order-0
> > > > pages, via:
> > > > 
> > > > static void skb_free_head(struct sk_buff *skb)
> > > > {
> > > > 	unsigned char *head = skb->head;
> > > > 
> > > > 	if (skb->head_frag)
> > > > 		skb_free_frag(head);
> > > > 	else
> > > > 		kfree(head);
> > > > }
> > > > 
> > > > static inline void skb_free_frag(void *addr)
> > > > {
> > > > 	page_frag_free(addr);
> > > > }
> > > > 
> > > > /*
> > > >  * Frees a page fragment allocated out of either a compound or order 0 page.
> > > >  */
> > > > void page_frag_free(void *addr)
> > > > {
> > > > 	struct page *page = virt_to_head_page(addr);
> > > > 
> > > > 	if (unlikely(put_page_testzero(page)))
> > > > 		__free_pages_ok(page, compound_order(page));
> > > > }
> > > > EXPORT_SYMBOL(page_frag_free);    
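
To tie this back to the perf callchain above: for order-0 pages,
__free_pages_ok() ends up in free_one_page(), which is where zone->lock
(the queued_spin_lock_slowpath hit) gets taken.  Roughly, simplified
from mm/page_alloc.c (details vary by kernel version):

static void free_one_page(struct zone *zone, struct page *page,
			  unsigned long pfn, unsigned int order,
			  int migratetype)
{
	/* This is the lock the perf trace shows us contending on */
	spin_lock(&zone->lock);
	__free_one_page(page, pfn, zone, order, migratetype);
	spin_unlock(&zone->lock);
}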
> > > 
> > > I think there is a problem here - order-0 pages are freed directly to
> > > the buddy allocator, bypassing per-cpu-pages. This might be the reason
> > > lock contention appeared on the free path.
> > 
> > OMG - you just found a significant issue with the network stack's
> > interaction with the page allocator!  This explains why I could not get
> > the PCP (Per-Cpu-Pages) system to show good performance in my networking
> > benchmarks: we are basically only using the alloc side of PCP, and not
> > the free side.
> 
> Exactly.
> 
> >  We have spent years adding different driver-level recycle tricks to
> > avoid this code path getting activated, exactly because it is rather
> > slow and problematic to hit this zone->lock.
> 
> I can see that when this code path is hit, it causes unnecessary taking
> of the zone lock for order-0 pages and causes lock contention.
> 
> >   
> > > Can someone apply below diff and see if lock contention is gone?  
> > 
> > I have also applied and tested this patch, and yes, the lock contention
> > is gone.  As mentioned, it is rather difficult to hit this code path, as
> > the driver page recycle mechanism tries to hide/avoid it, but mlx5 +
> > page_pool + CPU-map recycling has a known weakness that bypasses the
> > driver page recycle scheme (which I've not fixed yet).  I observed a 7%
> > speedup for this micro benchmark.
> 
> Good to know this, I will prepare a formal patch.

I wonder if this code is still missing something. I was looking at
using the put_devmap_managed_page() infrastructure, but I realized that
page_frag_free() is also skipping this code path.  I guess I can add
it later, once I show/prove (performance-wise) that this is a good idea
(as we currently don't have any users).
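
Roughly, I'd expect it to mirror what put_page() already does, i.e.
something along these lines (untested sketch, assuming the
bool-returning put_devmap_managed_page() helper that put_page() uses):

void page_frag_free(void *addr)
{
	struct page *page = virt_to_head_page(addr);

	/* Hypothetical: catch devmap/ZONE_DEVICE managed pages before the
	 * normal refcount path, like put_page() does.
	 */
	if (put_devmap_managed_page(page))
		return;

	if (unlikely(put_page_testzero(page)))
		__free_pages_ok(page, compound_order(page));
}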


> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index e2ef1c17942f..65c0ae13215a 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
> > >  {
> > >  	struct page *page = virt_to_head_page(addr);
> > >  
> > > -	if (unlikely(put_page_testzero(page)))
> > > -		__free_pages_ok(page, compound_order(page));
> > > +	if (unlikely(put_page_testzero(page))) {
> > > +		unsigned int order = compound_order(page);
> > > +
> > > +		if (order == 0)
> > > +			free_unref_page(page);
> > > +		else
> > > +			__free_pages_ok(page, order);
> > > +	}
> > >  }
> > >  EXPORT_SYMBOL(page_frag_free);  
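
For the record, the reason this helps is that free_unref_page() queues
the page on the per-CPU (PCP) free list and only takes zone->lock when
that list overflows, freeing a whole batch under a single lock
acquisition instead of taking the lock on every free.  Roughly,
simplified from mm/page_alloc.c (details vary by kernel version):

static void free_unref_page_commit(struct page *page, unsigned long pfn)
{
	struct zone *zone = page_zone(page);
	struct per_cpu_pages *pcp;

	/* Fast path: queue the page on this CPU's list, no zone->lock */
	pcp = &this_cpu_ptr(zone->pageset)->pcp;
	list_add(&page->lru, &pcp->lists[get_pcppage_migratetype(page)]);
	pcp->count++;

	/* zone->lock is only taken inside free_pcppages_bulk(), when the
	 * per-CPU list grows past pcp->high, and then a whole batch is
	 * freed under one lock acquisition.
	 */
	if (pcp->count >= pcp->high)
		free_pcppages_bulk(zone, READ_ONCE(pcp->batch), pcp);
}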
> > 
> > Thank you Aaron for spotting this!!!  
> 
> Which is impossible without your analysis :-)



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
