References: <20231106024413.2801438-1-almasrymina@google.com> <20231106024413.2801438-8-almasrymina@google.com> <4a0e9d53-324d-e19b-2a30-ba86f9e5569e@huawei.com>
In-Reply-To: <4a0e9d53-324d-e19b-2a30-ba86f9e5569e@huawei.com>
From: Mina Almasry
Date: Tue, 7 Nov 2023 13:56:51 -0800
Subject: Re: [RFC PATCH v3 07/12] page-pool: device memory support
To: Yunsheng Lin
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann, David Ahern,
    Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König,
    Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi

On Tue, Nov 7, 2023 at 12:00 AM Yunsheng Lin wrote:
>
> On 2023/11/6 10:44, Mina Almasry wrote:
> > Overload the LSB of struct page* to indicate that it's a page_pool_iov.
> >
> > Refactor mm calls on struct page* into helpers, and add page_pool_iov
> > handling on those helpers. Modify callers of these mm APIs with calls to
> > these helpers instead.
> >
> > In areas where struct page* is dereferenced, add a check for special
> > handling of page_pool_iov.
> >
> > Signed-off-by: Mina Almasry
> >
> > ---
> >  include/net/page_pool/helpers.h | 74 ++++++++++++++++++++++++++++++++-
> >  net/core/page_pool.c            | 63 ++++++++++++++++++++--------
> >  2 files changed, 118 insertions(+), 19 deletions(-)
> >
> > diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> > index b93243c2a640..08f1a2cc70d2 100644
> > --- a/include/net/page_pool/helpers.h
> > +++ b/include/net/page_pool/helpers.h
> > @@ -151,6 +151,64 @@ static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
> >       return NULL;
> >  }
> >
> > +static inline int page_pool_page_ref_count(struct page *page)
> > +{
> > +     if (page_is_page_pool_iov(page))
> > +             return page_pool_iov_refcount(page_to_page_pool_iov(page));
>
> We have added a lot of 'if's for the devmem case; it would be better to
> make this more generic so that we can have more unified metadata handling
> for normal pages and devmem. If we add another memory type here, do we
> need yet another 'if'?

Maybe; I'm not sure. I'm guessing new memory types will be either pages
or iovs, so maybe no new if statements will be needed.

> That is part of the reason I suggested using a more unified metadata for
> all the types of memory chunks used by page_pool.

I think your suggestion was to use struct pages for devmem. That was
thoroughly considered and intensely argued about in the initial
conversations regarding devmem and the initial RFC, and from the
conclusions there it is extremely clear to me that devmem struct pages
are categorically a no-go.

--
Thanks,
Mina
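
For readers following the thread, below is a minimal, self-contained
userspace sketch of the LSB-tagging pattern the patch relies on. It is
not the kernel code itself: struct page and struct page_pool_iov are
stubbed out, and PP_IOV_BIT, iov_to_tagged_page() and the refcount
fields are illustrative stand-ins, while page_is_page_pool_iov(),
page_to_page_pool_iov() and page_pool_page_ref_count() only follow the
patch's naming. The idea it demonstrates: a properly aligned struct
page pointer always has a clear LSB, so setting that bit lets the same
pointer slot carry a page_pool_iov pointer, and each helper dispatches
on that bit with a single 'if'.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the kernel types; purely illustrative. */
struct page { int refcount; };
struct page_pool_iov { int refcount; };

#define PP_IOV_BIT 1UL

/*
 * Both structs are at least word-aligned, so a real struct page *
 * always has its LSB clear; setting the LSB marks the pointer as
 * actually carrying a page_pool_iov *.
 */
static inline struct page *iov_to_tagged_page(struct page_pool_iov *ppiov)
{
	return (struct page *)((uintptr_t)ppiov | PP_IOV_BIT);
}

static inline int page_is_page_pool_iov(const struct page *page)
{
	return ((uintptr_t)page & PP_IOV_BIT) != 0;
}

static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
{
	/* Clear the tag bit to recover the real iov pointer. */
	return (struct page_pool_iov *)((uintptr_t)page & ~PP_IOV_BIT);
}

/* The "one 'if' per helper" dispatch pattern discussed above. */
static inline int page_pool_page_ref_count(struct page *page)
{
	if (page_is_page_pool_iov(page))
		return page_to_page_pool_iov(page)->refcount;
	return page->refcount;
}

int main(void)
{
	struct page page = { .refcount = 2 };
	struct page_pool_iov iov = { .refcount = 7 };
	struct page *tagged = iov_to_tagged_page(&iov);

	printf("plain page refcount: %d\n", page_pool_page_ref_count(&page));
	printf("tagged iov refcount: %d\n", page_pool_page_ref_count(tagged));
	return 0;
}

Compiled and run, this prints 2 and 7: the single branch in
page_pool_page_ref_count() is enough to route the query to either
backing type. It also illustrates the trade-off raised above: one tag
bit distinguishes exactly two kinds of backing memory, so a third
memory type would need either more tag bits or more branches.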