From: Nicolin Chen <nicolinc@nvidia.com>
To: Christoph Hellwig <hch@infradead.org>, Jason Gunthorpe <jgg@nvidia.com>
Cc: <kwankhede@nvidia.com>, <corbet@lwn.net>, <hca@linux.ibm.com>,
	<gor@linux.ibm.com>, <agordeev@linux.ibm.com>,
	<borntraeger@linux.ibm.com>, <svens@linux.ibm.com>,
	<zhenyuw@linux.intel.com>, <zhi.a.wang@intel.com>,
	<jani.nikula@linux.intel.com>, <joonas.lahtinen@linux.intel.com>,
	<rodrigo.vivi@intel.com>, <tvrtko.ursulin@linux.intel.com>,
	<airlied@linux.ie>, <daniel@ffwll.ch>, <farman@linux.ibm.com>,
	<mjrosato@linux.ibm.com>, <pasic@linux.ibm.com>,
	<vneethv@linux.ibm.com>, <oberpar@linux.ibm.com>,
	<freude@linux.ibm.com>, <akrowiak@linux.ibm.com>,
	<jjherne@linux.ibm.com>, <alex.williamson@redhat.com>,
	<cohuck@redhat.com>, <kevin.tian@intel.com>,
	<jchrist@linux.ibm.com>, <kvm@vger.kernel.org>,
	<linux-doc@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<linux-s390@vger.kernel.org>,
	<intel-gvt-dev@lists.freedesktop.org>,
	<intel-gfx@lists.freedesktop.org>,
	<dri-devel@lists.freedesktop.org>
Subject: Re: [RFT][PATCH v1 6/6] vfio: Replace phys_pfn with phys_page for vfio_pin_pages()
Date: Tue, 21 Jun 2022 14:47:37 -0700	[thread overview]
Message-ID: <YrI8eYsXemPgNBa2@Asurada-Nvidia> (raw)
In-Reply-To: <20220620153628.GA5502@nvidia.com>

On Mon, Jun 20, 2022 at 12:36:28PM -0300, Jason Gunthorpe wrote:
> On Sun, Jun 19, 2022 at 11:37:47PM -0700, Christoph Hellwig wrote:
> > On Sun, Jun 19, 2022 at 10:51:47PM -0700, Christoph Hellwig wrote:
> > > On Mon, Jun 20, 2022 at 12:00:46AM -0300, Jason Gunthorpe wrote:
> > > > On Fri, Jun 17, 2022 at 01:54:05AM -0700, Christoph Hellwig wrote:
> > > > > There is a bunch of code and comments in the iommu type1 code that
> > > > > suggest we can pin memory that is not page backed.
> > > > 
> > > > AFAIK you can.. The whole follow_pte() mechanism allows raw PFNs to be
> > > > loaded into the type1 maps and the pin API will happily return
> > > > them. This happens in almost every qemu scenario because PCI MMIO BAR
> > > > memory ends up routed down this path.
> > > 
> > > Indeed, my read wasn't deep enough.  Which means that we can't change
> > > the ->pin_pages interface to return a struct pages array, as we don't
> > > have one for those.
> > 
> > Actually.  gvt requires a struct page, and both s390 drivers seem to
> > require normal non-I/O, non-remapped kernel pointers.  So I think for the
> > vfio_pin_pages we can assume that we only want page backed memory and
> > remove the follow_fault_pfn case entirely.   But we'll probably have
> > to keep it for the vfio_iommu_replay case that is not tied to
> > emulated IOMMU drivers.
> 
> Right, that is my thinking - since all drivers actually need a struct
> page we should have the API return a working struct page and have the
> VFIO layer isolate the non-struct page stuff so it never leaks out of
> this API.
> 
> This nicely fixes the various problems in all the drivers if io memory
> comes down this path.
> 
> It is also why doing too much surgery deeper into type 1 probably
> isn't too worthwhile as it still needs raw pfns in its data
> structures for iommu modes.

Christoph, do you agree with Jason's remark about not doing too much
surgery on the type1 code? Or would you still like this series to change
type1 as well, e.g. by removing follow_fault_pfn() as you mentioned above?
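
For reference, below is a minimal sketch of what a caller could look like
once vfio_pin_pages() hands back struct page pointers as discussed above.
The exact signature and the helper name my_mdev_read_guest_page() are
assumptions for illustration only, not code taken from this series:

/*
 * Illustrative only: a hypothetical mdev helper that reads a small,
 * page-contained buffer from guest memory, assuming a signature like
 *   int vfio_pin_pages(struct vfio_device *, dma_addr_t iova,
 *                      int npage, int prot, struct page **pages);
 * i.e. the struct-page-based direction discussed in this thread.
 */
#include <linux/highmem.h>
#include <linux/iommu.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/vfio.h>

static int my_mdev_read_guest_page(struct vfio_device *vdev,
                                   dma_addr_t iova, void *buf, size_t len)
{
        struct page *page;
        void *vaddr;
        int ret;

        /* This sketch assumes the copy does not cross a page boundary. */
        if (offset_in_page(iova) + len > PAGE_SIZE)
                return -EINVAL;

        /*
         * Pin exactly one guest page.  With a struct-page-based API the
         * pin would fail for non-page-backed (e.g. PCI MMIO) mappings,
         * so no pfn_valid()/follow_fault_pfn handling is needed here.
         */
        ret = vfio_pin_pages(vdev, iova, 1, IOMMU_READ, &page);
        if (ret != 1)
                return ret < 0 ? ret : -EFAULT;

        /* A real struct page in hand: kmap_local_page() is always valid. */
        vaddr = kmap_local_page(page);
        memcpy(buf, vaddr + offset_in_page(iova), len);
        kunmap_local(vaddr);

        vfio_unpin_pages(vdev, iova, 1);
        return 0;
}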
