Date: Tue, 25 Jun 2019 09:18:23 +0200
From: Christoph Hellwig
To: Logan Gunthorpe
Cc: Jason Gunthorpe, Christoph Hellwig, Dan Williams,
    Linux Kernel Mailing List, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
    linux-rdma, Jens Axboe, Bjorn Helgaas, Sagi Grimberg,
    Keith Busch, Stephen Bates
Subject: Re: [RFC PATCH 00/28] Removing struct page from P2PDMA
Message-ID: <20190625071823.GA30350@lst.de>
In-Reply-To: <1041d2c6-f22c-81f2-c141-fb821b35c0c1@deltatee.com>
References: <20190620161240.22738-1-logang@deltatee.com>
 <20190620193353.GF19891@ziepe.ca>
 <20190624073126.GB3954@lst.de>
 <20190624134641.GA8268@ziepe.ca>
 <1041d2c6-f22c-81f2-c141-fb821b35c0c1@deltatee.com>

On Mon, Jun 24, 2019 at 10:10:16AM -0600, Logan Gunthorpe wrote:
> Yes, that's correct. The intent was to invert it so the dma_map could
> happen at the start of the process so that P2PDMA code could be called
> with all the information it needs to make its decision on how to map,
> without having to hook into the mapping process of every driver that
> wants to participate.

And that just isn't how layering works.  We need to keep generating the
DMA addresses in the driver on the receiving end, as there are all
kinds of interesting ideas for how we do that.  E.g. for the Mellanox
NICs, addressing their own BARs is not done with PCIe bus addresses but
with relative offsets.  And while NVMe has refused to go down that
route in the current band-aid fix for CMB addressing, I suspect it will
sooner or later have to do the same to deal with the addressing
problems in a multi-PASID world.
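
To make the distinction concrete, here is a minimal sketch (not taken
from the patch set; the helper name and its shape are made up for
illustration) of what a receiving driver might do: express the target
as a relative offset when it falls inside the device's own BAR, and
only fall back to a PCIe bus address via dma_map_resource() otherwise.

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical helper: decide how a receiving driver expresses the
 * address of a P2P target.  If the physical address lies within one of
 * the device's own BARs, use an offset relative to that BAR; otherwise
 * go through the DMA API to obtain a bus address.
 */
static int p2p_target_addr(struct pci_dev *pdev, int bar,
			   phys_addr_t phys, size_t len,
			   u64 *addr_out, bool *is_local_offset)
{
	phys_addr_t bar_start = pci_resource_start(pdev, bar);
	resource_size_t bar_len = pci_resource_len(pdev, bar);
	dma_addr_t dma_addr;

	if (phys >= bar_start && phys + len <= bar_start + bar_len) {
		/* Target lives in our own BAR: use a relative offset. */
		*addr_out = phys - bar_start;
		*is_local_offset = true;
		return 0;
	}

	/* Peer memory elsewhere: map it to a PCIe bus address. */
	dma_addr = dma_map_resource(&pdev->dev, phys, len,
				    DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(&pdev->dev, dma_addr))
		return -ENOMEM;

	*addr_out = dma_addr;
	*is_local_offset = false;
	return 0;
}

The point of the sketch is only that the decision lives in the driver
that owns the hardware, not in a generic layer that hands it a
pre-cooked address.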