From: Mina Almasry <almasrymina@google.com>
Date: Thu, 9 Nov 2023 18:59:04 -0800
Subject: Re: [RFC PATCH v3 04/12] netdev: support binding dma-buf to netdevice
To: Paolo Abeni
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Jesper Dangaard Brouer,
 Ilias Apalodimas, Arnd Bergmann, David Ahern, Willem de Bruijn, Shuah Khan,
 Sumit Semwal, Christian König, Shakeel Butt, Jeroen de Borst,
 Praveen Kaligineedi, Kaiyuan Zhang
References: <20231106024413.2801438-1-almasrymina@google.com>
 <20231106024413.2801438-5-almasrymina@google.com>
 <076fa6505f3e1c79cc8acdf9903809fad6c2fd31.camel@redhat.com>
In-Reply-To: <076fa6505f3e1c79cc8acdf9903809fad6c2fd31.camel@redhat.com>

On Thu, Nov 9, 2023 at 12:30 AM Paolo Abeni wrote:
>
> I'm trying to wrap my head around the whole infra... the above line is
> confusing. Why do you increment dma_addr? It will be re-initialized in
> the next iteration.
>

That is just a mistake, sorry. Will remove this increment.
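
(For reference, the loop in question reduces to something like the
sketch below. This is a reconstruction from the discussion, not the
literal diff; binding->chunk_pool, owner, and err_free_chunks are
names assumed from the patch context.)

	for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
		/*
		 * dma_addr is re-derived from the sg entry at the top
		 * of every iteration, so a trailing "dma_addr += len"
		 * is dead code and gets dropped.
		 */
		dma_addr_t dma_addr = sg_dma_address(sg);
		size_t len = sg_dma_len(sg);

		/* Carve this sg entry into the dma-buf chunk gen_pool. */
		err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
					 dma_addr, len,
					 dev_to_node(&dev->dev), owner);
		if (err)
			goto err_free_chunks;
	}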

On Thu, Nov 9, 2023 at 1:29 AM Yunsheng Lin wrote:
> >>>
> >>> gen_pool_destroy() BUG_ON()s if it's not empty at the time of
> >>> destroying. Technically that should never happen, because
> >>> __netdev_devmem_binding_free() should only be called when the refcount
> >>> hits 0, so all the chunks have been freed back to the gen_pool. But,
> >>> just in case, I don't want to crash the server just because I'm
> >>> leaking a chunk... this is a bit of defensive programming that is
> >>> typically frowned upon, but the behavior of gen_pool is so severe I
> >>> think the WARN() + check is warranted here.
> >>
> >> It seems pretty normal for the above to happen nowadays because of
> >> retransmit timeouts and the NAPI defer schemes mentioned below:
> >>
> >> https://lkml.kernel.org/netdev/168269854650.2191653.8465259808498269815.stgit@firesoul/
> >>
> >> And currently the page pool core handles that by using a workqueue.
> >
> > Forgive me, but I'm not understanding the concern here.
> >
> > __netdev_devmem_binding_free() is called when binding->ref hits 0.
> >
> > binding->ref is incremented when an iov slice of the dma-buf is
> > allocated, and decremented when an iov is freed. So,
> > __netdev_devmem_binding_free() can't really be called unless all the
> > iovs have been freed and gen_pool_size() == gen_pool_avail(),
> > regardless of what's happening on the page_pool side of things, right?
>
> I seem to have misunderstood it. In that case, it seems to be about
> defensive programming, like the other checks.
>
> By looking at it more closely, it seems napi_frag_unref() calls
> page_pool_page_put_many() directly, which means devmem seems to
> be bypassing the napi_safe optimization.
>
> Can napi_frag_unref() reuse napi_pp_put_page() in order to reuse
> the napi_safe optimization?
>

I think it already does. page_pool_page_put_many() is only called if
!recycle or if napi_pp_put_page() fails. In that case,
page_pool_page_put_many() is just a replacement for put_page(),
because this 'page' may be an iov.

-- 
Thanks,
Mina
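
P.S. To make the fallback ordering concrete, the unref path I'm
describing looks roughly like this (a sketch reconstructed from this
thread: napi_frag_unref() and napi_pp_put_page() are existing helpers,
page_pool_page_put_many() comes from this series):

	static inline void napi_frag_unref(skb_frag_t *frag, bool recycle,
					   bool napi_safe)
	{
		struct page *page = skb_frag_page(frag);

		/* Try the (possibly napi_safe) page_pool recycle path first. */
		if (recycle && napi_pp_put_page(page, napi_safe))
			return;

		/*
		 * Fallback: behaves like put_page(), but also handles
		 * devmem iovs that masquerade as pages.
		 */
		page_pool_page_put_many(page, 1);
	}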