From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1756689AbcK2KeU (ORCPT );
        Tue, 29 Nov 2016 05:34:20 -0500
Received: from mail-io0-f179.google.com ([209.85.223.179]:36713 "EHLO
        mail-io0-f179.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1754485AbcK2KeL (ORCPT );
        Tue, 29 Nov 2016 05:34:11 -0500
MIME-Version: 1.0
In-Reply-To: <87shqafv07.fsf@free-electrons.com>
References: <87shqafv07.fsf@free-electrons.com>
From: Marcin Wojtas
Date: Tue, 29 Nov 2016 11:34:10 +0100
Message-ID:
Subject: Re: [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to
        store the rx buffer virtual address
To: Gregory CLEMENT
Cc: "David S. Miller" , linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
        Jisheng Zhang , Arnd Bergmann , Jason Cooper , Andrew Lunn ,
        Sebastian Hesselbarth , Thomas Petazzoni ,
        "linux-arm-kernel@lists.infradead.org" , Nadav Haklai ,
        Dmitri Epshtein , Yelena Krivosheev
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Gregory,

2016-11-29 11:19 GMT+01:00 Gregory CLEMENT :
> Hi Marcin,
>
> On Tue, Nov 29 2016, Marcin Wojtas wrote:
>
>> Hi Gregory,
>>
>> Another remark below, sorry for the noise.
>>
>> 2016-11-29 10:37 GMT+01:00 Gregory CLEMENT :
>>> Until now, the virtual address of the received buffer was stored in
>>> the cookie field of the rx descriptor. However, this field is only
>>> 32 bits wide, which prevents the driver from being used on 64-bit
>>> architectures.
>>>
>>> With this patch the virtual address is stored in an array that is not
>>> shared with the hardware (so there is no need to use the DMA API for
>>> it). Thanks to this, the array can be accessed through the cache,
>>> unlike the rx descriptor members.
>>>
>>> The change is done for the swbm path only, because the hwbm still uses
>>> the cookie field; this also means that hwbm is currently not usable on
>>> 64-bit.
>>>
>>> Signed-off-by: Gregory CLEMENT
>>> ---
>>>  drivers/net/ethernet/marvell/mvneta.c | 93 ++++++++++++++++++++++++----
>>>  1 file changed, 81 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
>>> index 1b84f746d748..32b142d0e44e 100644
>>> --- a/drivers/net/ethernet/marvell/mvneta.c
>>> +++ b/drivers/net/ethernet/marvell/mvneta.c
>>> @@ -561,6 +561,9 @@ struct mvneta_rx_queue {
>>>         u32 pkts_coal;
>>>         u32 time_coal;
>>>
>>> +       /* Virtual address of the RX buffer */
>>> +       void **buf_virt_addr;
>>> +
>>>         /* Virtual address of the RX DMA descriptors array */
>>>         struct mvneta_rx_desc *descs;
>>>
>>> @@ -1573,10 +1576,14 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp,
>>>
>>>  /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
>>>  static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
>>> -                               u32 phys_addr, u32 cookie)
>>> +                               u32 phys_addr, void *virt_addr,
>>> +                               struct mvneta_rx_queue *rxq)
>>>  {
>>> -       rx_desc->buf_cookie = cookie;
>>> +       int i;
>>> +
>>>         rx_desc->buf_phys_addr = phys_addr;
>>> +       i = rx_desc - rxq->descs;
>>> +       rxq->buf_virt_addr[i] = virt_addr;
>>>  }
>>>
>>>  /* Decrement sent descriptors counter */
>>> @@ -1781,7 +1788,8 @@ EXPORT_SYMBOL_GPL(mvneta_frag_free);
>>>
>>>  /* Refill processing for SW buffer management */
>>>  static int mvneta_rx_refill(struct mvneta_port *pp,
>>> -                           struct mvneta_rx_desc *rx_desc)
>>> +                           struct mvneta_rx_desc *rx_desc,
>>> +                           struct mvneta_rx_queue *rxq)
>>>
>>>  {
>>>         dma_addr_t phys_addr;
>>> @@ -1799,7 +1807,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>>>                 return -ENOMEM;
>>>         }
>>>
>>> -       mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
>>> +       mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
>>>         return 0;
>>>  }
>>>
>>> @@ -1861,7 +1869,12 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
>>>
>>>         for (i = 0; i < rxq->size; i++) {
>>>                 struct mvneta_rx_desc *rx_desc = rxq->descs + i;
>>> -               void *data = (void *)rx_desc->buf_cookie;
>>> +               void *data;
>>> +
>>> +               if (!pp->bm_priv)
>>> +                       data = rxq->buf_virt_addr[i];
>>> +               else
>>> +                       data = (void *)(uintptr_t)rx_desc->buf_cookie;
>>
>> Dropping packets for HWBM (in fact returning the dropped buffers to the
>> pool) is done a couple of lines above. This point will never be
>
> Indeed, I changed the code at every place buf_cookie was used and
> missed the fact that for HWBM this code is never reached.
>
>> reached with HWBM enabled (and it's also incorrect).
>
> What is incorrect?
>

The possible dma unmapping + mvneta_frag_free for HWBM buffers when
dropping packets; those buffers should only be returned to their pool.

Thanks,
Marcin
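
To make the point concrete, below is a rough sketch of the drop path being
discussed. It is not part of the patch: the overall shape of
mvneta_rxq_drop_pkts and helpers such as mvneta_rxq_busy_desc_num_get,
MVNETA_RX_GET_BM_POOL_ID and mvneta_bm_pool_put_bp are assumed from the
existing mvneta/mvneta_bm code and should be checked against the tree. With
HWBM enabled, dropped buffers are handed back to their pool and the function
returns early, so the loop modified by the last hunk only ever runs for SWBM;
unmapping and mvneta_frag_free'ing a HWBM buffer there would therefore be
both unreachable and wrong.

/* Sketch, not from the patch: why the buf_cookie branch in the last
 * hunk is dead code for HWBM.
 */
static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
                                 struct mvneta_rx_queue *rxq)
{
        int rx_done, i;

        rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq);
        if (rx_done)
                mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done);

        if (pp->bm_priv) {
                /* HWBM: give each dropped buffer back to its pool and
                 * return; no dma unmapping, no mvneta_frag_free().
                 */
                for (i = 0; i < rx_done; i++) {
                        struct mvneta_rx_desc *rx_desc =
                                        mvneta_rxq_next_desc_get(rxq);
                        u8 pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
                        struct mvneta_bm_pool *bm_pool;

                        bm_pool = &pp->bm_priv->bm_pools[pool_id];
                        mvneta_bm_pool_put_bp(pp->bm_priv, bm_pool,
                                              rx_desc->buf_phys_addr);
                }
                return;
        }

        /* Only SWBM reaches this loop, so the virtual address can be
         * taken from the new buf_virt_addr array unconditionally.
         */
        for (i = 0; i < rxq->size; i++) {
                struct mvneta_rx_desc *rx_desc = rxq->descs + i;
                void *data = rxq->buf_virt_addr[i];

                dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
                                 MVNETA_RX_BUF_SIZE(pp->pkt_size),
                                 DMA_FROM_DEVICE);
                mvneta_frag_free(pp->frag_size, data);
        }
}

If that matches the intent, the simplest fix would seem to be dropping the
if/else in that loop and reading rxq->buf_virt_addr[i] directly, keeping the
HWBM handling entirely in the early-return branch.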