Date: Mon, 10 Jul 2023 20:49:25 -0300
From: Jason Gunthorpe
To: Mina Almasry, Christoph Hellwig, John Hubbard, Dan Williams
Cc: David Ahern, Jakub Kicinski, Jesper Dangaard Brouer, brouer@redhat.com,
	Alexander Duyck, Yunsheng Lin, davem@davemloft.net, pabeni@redhat.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Lorenzo Bianconi,
	Yisen Zhuang, Salil Mehta, Eric Dumazet, Sunil Goutham, Geetha sowjanya,
	Subbaraya Sundeep, hariprasad, Saeed Mahameed, Leon Romanovsky,
	Felix Fietkau, Ryder Lee, Shayne Chen, Sean Wang, Kalle Valo,
	Matthias Brugger, AngeloGioacchino Del Regno, Jesper Dangaard Brouer,
	Ilias Apalodimas, linux-rdma@vger.kernel.org,
	linux-wireless@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-mediatek@lists.infradead.org, Jonathan Lemon
Subject: Re: Memory providers multiplexing (Was: [PATCH net-next v4 4/5] page_pool: remove PP_FLAG_PAGE_FRAG flag)
References: <20230616122140.6e889357@kernel.org>
	<20230619110705.106ec599@kernel.org>
	<5e0ac5bb-2cfa-3b58-9503-1e161f3c9bd5@kernel.org>

On Mon, Jul 10, 2023 at 04:02:59PM -0700, Mina Almasry wrote:
> On Mon, Jul 10, 2023 at 10:44 AM Jason Gunthorpe wrote:
> >
> > On Wed, Jul 05, 2023 at 06:17:39PM -0700, Mina Almasry wrote:
> >
> > > Another issue is that in networks with low MTU, we could be DMAing
> > > 1400/1500 bytes into each allocation, which is problematic if the
> > > allocation is 8K+. I would need to investigate a bit to see if/how to
> > > solve that, and we may end up having to split the page and again run
> > > into the 'not enough room in struct page' problem.
> >
> > You don't have an in-tree driver to use this with, so who knows, but
> > the out-of-tree GPU drivers tend to use a 64k memory management page
> > size, and I don't expect you'd make progress with a design where a 64k
> > naturally sized allocator is producing 4k/8k non-compound pages just
> > for netdev. We are still struggling with pagemap support for variable
> > page size folios, so there is a bunch of technical blockers before
> > drivers could do this.
> >
> > This is why it is so important to come up with a complete in-tree
> > solution: we cannot review this design if your work is done with
> > hacked-up out-of-tree drivers.
>
> I think you're assuming the proposal requires dma-buf exporter driver
> changes, and I have a 'hacked up out of tree driver' not visible to
> you.

Oh, I thought it was obvious that what you did in patch 1 was a total
non-starter when I said you can't abuse ZONE_DEVICE pages like this.

You must create ZONE_DEVICE P2P pages, not MEMORY_DEVICE_PRIVATE
pages, to represent P2P memory, and you can't do that automatically
from the dmabuf core code.

Without doing this, the DMA API doesn't actually work properly,
because it doesn't have enough information about what the underlying
exporter is.
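To make the difference concrete, here is a rough, untested sketch of
the exporter-driver side when the BAR really is published as
MEMORY_DEVICE_PCI_P2PDMA backed ZONE_DEVICE pages (the exporter_*
name is made up for illustration; error handling is elided):

        #include <linux/pci.h>
        #include <linux/pci-p2pdma.h>

        /* Hypothetical exporter init: publish the device BAR as
         * MEMORY_DEVICE_PCI_P2PDMA backed ZONE_DEVICE pages. The pgmap
         * created here records which pci_dev owns the memory, which is
         * the information the DMA API needs to choose between a PCI bus
         * address and an IOMMU mapping for a given initiator.
         */
        static int exporter_init_p2p(struct pci_dev *pdev, int bar)
        {
                int rc;

                rc = pci_p2pdma_add_resource(pdev, bar,
                                             pci_resource_len(pdev, bar), 0);
                if (rc)
                        return rc;

                /* Allocations from this BAR are now real struct pages that
                 * generic code can recognise with is_pci_p2pdma_page(),
                 * unlike DEVICE_PRIVATE pages, whose physical location is
                 * hidden from everything except the pgmap owner.
                 */
                return 0;
        }

pci_p2pdma_add_resource() is what builds that pgmap internally; a
DEVICE_PRIVATE pgmap created generically in dmabuf core code does not
carry the owning pci_dev, so the DMA layer has nothing to base a
peer-to-peer mapping decision on.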
The entire point of DEVICE_PRIVATE is that the page content, and
access to the page's physical location, is *explicitly* unavailable
to anyone but the pgmap owner.

> > Fully and properly adding P2P ZONE_DEVICE to a real-world driver is
> > a pretty big ask still.
>
> There is no such ask.

Well, there is from me, if you want to use struct pages as handles
for P2P memory. That is what we have defined in the pgmap area.

Also, I should warn you that your 'option 2' is being NAK'd by
Christoph for now: we are not adding any new code around DMABUF's
hacky use of a NULL sg_page scatterlist for P2P memory either. I've
been working on solutions here, but it is slow going.

> On dma-buf changes required. I do need approval from the dma-buf
> maintainers,

It is a neat hack, of sorts, to abuse DEVICE_PRIVATE to create struct
pages for the exclusive use of the page pool - but you need more
approval than just the dmabuf maintainers to abuse the pgmap
framework like this. At least from my position, I want to see
MEMORY_DEVICE_PCI_P2PDMA used to represent P2P memory.

You haven't CC'd anyone from the mm community, so I've added some
people here to see if there are other opinions. To be clear, you need
an explicit ack from the mm people on the abusive use of pgmap in
patch 1.

I know it is not what you want to hear, but there are actual reasons
why the P2P DMA problem has been festering for so long, and hacky
quick fixes like this are not going to be enough.

Jason
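For reference, a minimal sketch of the consumer side under that model,
assuming the exporter has already called pci_p2pdma_add_resource()
(untested; consumer_fill_sg, pdev and sgl are placeholder names):

        #include <linux/mm.h>
        #include <linux/pci-p2pdma.h>
        #include <linux/scatterlist.h>

        /* Illustrative consumer side: a MEMORY_DEVICE_PCI_P2PDMA
         * allocation is backed by real struct pages, so it can sit in a
         * scatterlist like any other page instead of relying on the
         * NULL sg_page trick.
         */
        static int consumer_fill_sg(struct pci_dev *pdev,
                                    struct scatterlist *sgl,
                                    unsigned int len)
        {
                void *buf = pci_alloc_p2pmem(pdev, len);

                if (!buf)
                        return -ENOMEM;

                /* ZONE_DEVICE P2P pages are still ordinary struct pages
                 * from the point of view of scatterlist users.
                 */
                sg_set_page(sgl, virt_to_page(buf), len, 0);
                return 0;
        }

Because the entry keeps a real page, helpers such as
is_pci_p2pdma_page() and the P2P-aware DMA mapping path can see what
they are dealing with, which is exactly what the NULL sg_page hack
hides.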