From: Ilias Apalodimas
Subject: Re: [PATCH v2 net-next 0/8] dpaa2-eth: Introduce XDP support
Date: Thu, 13 Dec 2018 20:47:17 +0200
Message-ID: <20181213184717.GA8436@apalos>
References: <1543249591-14563-1-git-send-email-ruxandra.radulescu@nxp.com>
 <20181205164502.5b11ff7e@redhat.com>
 <20181207172016.GA21965@apalos>
 <20181207175135.GA22649@apalos>
To: Ioana Ciocoi Radulescu
Cc: Jesper Dangaard Brouer, "netdev@vger.kernel.org", "davem@davemloft.net",
 Ioana Ciornei, "dsahern@gmail.com", Camelia Alexandra Groza

Hi Ioana,

> > > Well, if you don't have to use 64kb pages you can use the page_pool API
> > > (only used from mlx5 atm) and get the XDP recycling for free. The memory
> > > 'waste' for 4kb pages isn't too much if the platforms the driver sits on
> > > have decent amounts of memory (and the number of descriptors used is not
> > > too high).
> > > We still have work in progress with Jesper (just posted an RFC) with
> > > improvements on the API.
> > > Using it is fairly straightforward. This is a patchset on Marvell's
> > > mvneta driver with the API changes needed:
> > > https://www.spinics.net/lists/netdev/msg538285.html
> > >
> > > If you need 64kb pages you would have to introduce page recycling and
> > > sharing in your driver, like the intel/mlx drivers do.
> >
> > Thanks a lot for the info, will look into this. Do you have any pointers
> > as to why the full page restriction exists in the first place? Sorry if
> > it's a dumb question, but I haven't found details on this and I'd really
> > like to understand it.
>
> After a quick glance, not sure we can use the page_pool API.
> The problem is that our driver is not ring-based: we have a single
> buffer pool used by all Rx queues, so using page_pool allocations
> would imply adding a layer of synchronization in our driver.

We had similar concerns a while ago. Have a look at:
https://www.spinics.net/lists/netdev/msg481494.html
https://www.mail-archive.com/netdev@vger.kernel.org/msg236820.html

Jesper and I have briefly discussed this; this type of hardware is
something we need to consider for the page_pool API.

> I'm still trying to figure out how much trouble we're in for not using
> a single page per packet in our driver, considering we don't support
> XDP_REDIRECT yet. Guess I'll wait for Jesper's answer on this.

I might be wrong, but I don't think anything apart from performance will
'break', since no memory is handed to userspace (no XDP_REDIRECT
implemented). Jesper will probably be able to think of any corner cases
I might be ignoring.
Then again, if you write and test the driver without it now, you'll end
up rewriting and re-testing it if you ever need the feature later.

/Ilias
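For context, below is a rough sketch of the per-ring page_pool setup/alloc/recycle
pattern the mvneta patchset linked above revolves around: one pool per Rx ring,
registered as that ring's XDP memory model, with buffers allocated from the pool
and handed straight back to it on XDP_DROP. This is only an illustrative outline
against the page_pool API of the ~4.20 era, not the mvneta or dpaa2-eth code; the
`my_rx_ring` structure and `my_drv_*` function names are made up for the example,
and exact fields or flags may differ in a given kernel tree.

#include <linux/netdevice.h>
#include <linux/dma-mapping.h>
#include <net/page_pool.h>
#include <net/xdp.h>

/* Hypothetical per-ring state; a real driver keeps this in its own structs. */
struct my_rx_ring {
	struct page_pool *pool;
	struct xdp_rxq_info xdp_rxq;
	struct device *dev;
	int id;
	int size;
};

/* One page_pool per Rx ring: allocations stay lockless because only this
 * ring's NAPI context allocates from (and recycles into) its own pool.
 */
static int my_drv_rx_ring_init(struct my_rx_ring *ring, struct net_device *ndev)
{
	struct page_pool_params pp_params = {
		.order		= 0,			/* one 4K page per buffer */
		.flags		= PP_FLAG_DMA_MAP,	/* pool handles DMA mapping */
		.pool_size	= ring->size,
		.nid		= NUMA_NO_NODE,
		.dev		= ring->dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	int err;

	ring->pool = page_pool_create(&pp_params);
	if (IS_ERR(ring->pool))
		return PTR_ERR(ring->pool);

	/* Tie the pool to this ring's xdp_rxq so the XDP core knows how to
	 * return frames to it.
	 */
	err = xdp_rxq_info_reg(&ring->xdp_rxq, ndev, ring->id);
	if (err)
		return err;

	return xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, MEM_TYPE_PAGE_POOL,
					  ring->pool);
}

/* Refill path: grab a page from the pool instead of alloc_page(). */
static struct page *my_drv_alloc_rx_page(struct my_rx_ring *ring)
{
	return page_pool_dev_alloc_pages(ring->pool);
}

/* XDP_DROP fast path: give the page straight back to the pool's cache. */
static void my_drv_drop_rx_page(struct my_rx_ring *ring, struct page *page)
{
	page_pool_recycle_direct(ring->pool, page);
}

The per-ring ownership is exactly what the quoted concern is about: with a single
buffer pool shared by all Rx queues, this pattern does not map directly and would
need extra synchronization, which is the case being raised for the page_pool API.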