From: Jason Gunthorpe <jgg@mellanox.com>
To: Christoph Hellwig <hch@lst.de>
Cc: "Jérôme Glisse" <jglisse@redhat.com>,
	"Ben Skeggs" <bskeggs@redhat.com>,
	"Felix Kuehling" <Felix.Kuehling@amd.com>,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"nouveau@lists.freedesktop.org" <nouveau@lists.freedesktop.org>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	"amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 07/13] mm: remove the page_shift member from struct hmm_range
Date: Tue, 30 Jul 2019 17:50:16 +0000
Message-ID: <20190730175011.GL24038@mellanox.com>
In-Reply-To: <20190730131430.GC4566@lst.de>

On Tue, Jul 30, 2019 at 03:14:30PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 30, 2019 at 12:55:17PM +0000, Jason Gunthorpe wrote:
> > I suspect this was added for the ODP conversion that does use both
> > page sizes. I think the ODP code for this is kind of broken, but I
> > haven't delved into that..
> > 
> > The challenge is that the driver needs to know what page size to
> > configure the hardware with before it does any range stuff.
> > 
> > The other challenge is that the HW is configured to do only one page
> > size, and if the underlying CPU page size changes it goes south.
> > 
> > What I would prefer is if the driver could somehow dynamically adjust
> > the page size after each dma map, but I don't know if ODP HW can
> > do that.
> > 
> > Since this is all driving toward making ODP use this, maybe we should
> > keep this API?
> > 
> > I'm not sure I can lose the crappy huge page support in ODP.
> 
> The problem is that I see no way to use the current API.  To know
> the huge page size you need to have the vma, and the current API
> doesn't require a vma to be passed in.

The way ODP seems to work is that once it is in hugetlb mode the DMA
addresses must be huge pages, or the page fault will be failed. I think
that is a terrible design, but this is how the driver is..

So, from the HMM perspective, if the caller asked for huge pages then
the results have to be all huge pages or it is a hard failure.

It is not negotiated as an optimization like you are thinking.

[note, I haven't yet checked carefully how this works in ODP, every
 time I look at parts of it the thing seems crazy]
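
Roughly, the driver side behaviour I mean is something like the sketch
below. This is not the real ODP code and all the my_* names are
invented, it is just to show the "all huge pages or hard fail"
semantic:

/*
 * Sketch only: once the MR is in hugetlb mode the fault handler
 * insists the faulted range comes back as one aligned, physically
 * contiguous huge page, otherwise the fault is failed outright.
 * struct my_mr, my_mirror_fault() and my_program_hw() are made up;
 * pages[] is a caller supplied array sized for one huge page.
 */
static int hugetlb_mode_fault(struct my_mr *mr, unsigned long addr,
			      struct page **pages)
{
	unsigned long hpage_size = 1UL << mr->hw_page_shift;
	unsigned long start = ALIGN_DOWN(addr, hpage_size);
	unsigned long npages = hpage_size >> PAGE_SHIFT;
	unsigned long i;
	int ret;

	/* mirror-fault the whole aligned huge page worth of VA */
	ret = my_mirror_fault(mr, start, start + hpage_size, pages);
	if (ret)
		return ret;

	/* must come back as one contiguous huge page - no 4k fallback */
	for (i = 1; i < npages; i++)
		if (pages[i] != pages[0] + i)
			return -EFAULT;

	return my_program_hw(mr, start, page_to_phys(pages[0]));
}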

> That's why I suggested an API where we pass a flag into hmm_range_fault
> saying that huge pages are ok, and it could then pass the shift out and
> limit itself to a single vma (which it normally doesn't - that is an
> additional complication).  But all this seems really awkward in terms
> of an API still.  AFAIK ODP is only used by mlx5, and mlx5 unlike other
> IB HCAs can use scatterlist style MRs with variable length per entry,
> so even if we pass multiple pages per entry from hmm it could coalesce
> them.  
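
Just to make sure I read that proposal right, you mean a signature
along these lines? (entirely made up, nothing like this exists today)

/* caller is willing to take a huge page result */
#define MY_RANGE_FAULT_ALLOW_HUGE	(1 << 0)

/*
 * Fault a range that must lie within a single vma.  If ALLOW_HUGE is
 * set and the vma is hugetlb, *shift is set to the huge page shift
 * actually used, otherwise it is set to PAGE_SHIFT.
 */
long my_range_fault(struct hmm_range *range, unsigned int flags,
		    unsigned int *shift);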

When the driver takes faults it has to repair the MR mapping, and
fixing a page in the middle of a variable length SGL would be pretty
complicated. Even so, I don't think the SG_GAPs feature and ODP are
compatible - I'm pretty sure ODP has to use page lists, not an SGL..

However, what ODP can maybe do is represent a full multi-level page
table, so we could have 2M entries that each map to a single DMA
address or to another page table with 4k pages (I have to check on
this).
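
Hand-waving, that two level idea would look something like the below
(this is not how mlx5 actually lays out its translation tables, all
the names are invented):

/* second level: 512 x 4k DMA addresses covering one 2M slot */
struct my_l2_table {
	dma_addr_t dma[512];
};

/* first level: each 2M slot is either one huge mapping or an L2 table */
struct my_l1_entry {
	bool is_leaf;
	union {
		dma_addr_t dma_2m;
		struct my_l2_table *l2;
	};
};

static dma_addr_t my_lookup(struct my_l1_entry *l1, unsigned long offset)
{
	struct my_l1_entry *e = &l1[offset >> 21];	/* 2M index */

	if (e->is_leaf)
		return e->dma_2m + (offset & (SZ_2M - 1));
	return e->l2->dma[(offset >> 12) & 511] + (offset & (SZ_4K - 1));
}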

But the driver isn't set up to do that right now.

> The best API for mlx5 would of course be to pass a biovec-style
> variable length structure that hmm_fault could fill out, but that would
> be a major restructure.

It would work, but the driver has to expand that into a page list
right away anyhow.

We can't even DMA map the biovec with today's DMA API, as the driver
needs the ability to remap at page granularity.
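
Ie today the driver ends up doing roughly the below, one mapping per
4k page, so a single page in the middle can later be unmapped and
replaced on fault. Sketch only - my_map_chunk() is an invented name,
the dma_map_page()/dma_unmap_page() calls are the standard DMA API:

static int my_map_chunk(struct device *dev, struct page **pages,
			unsigned int npages, dma_addr_t *dma_list)
{
	unsigned int i;

	/* one DMA mapping per 4k page so each entry can be remapped alone */
	for (i = 0; i < npages; i++) {
		dma_list[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
					   DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, dma_list[i]))
			goto unwind;
	}
	return 0;

unwind:
	while (i--)
		dma_unmap_page(dev, dma_list[i], PAGE_SIZE,
			       DMA_BIDIRECTIONAL);
	return -ENOMEM;
}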

Jason
